As one of the steps for a previous problem I faced, I need to see the logs for a Lambda@Edge function, but I cannot find them anywhere.
According to the documentation on Lambda@Edge:
When you review CloudWatch log files or metrics when you're
troubleshooting errors, be aware that they are displayed or stored in
the Region closest to the location where the function executed. So, if
you have a website or web application with users in the United
Kingdom, and you have a Lambda function associated with your
distribution, for example, you must change the Region to view the
CloudWatch metrics or log files for the London AWS Region.
The Lambda function I'm trying to find the logs for is located in us-east-1 (mandated by CloudFront, since it is used as a distribution's event handler), while I'm in Canada, so I assume the closest Region would be ca-central-1. But since I'm not developing in ca-central-1, I don't have any log groups in that Region. In any case, I don't see the logs for my Lambda@Edge function. For the sake of completeness, I checked all the Regions and couldn't find any trace of logs for the Lambda function. To be clear, I'm looking for a log group with the Lambda function's name.
I'm positive that there should be logs, since I have console.log() in my code, and I can also download the requested content (the Lambda function is in charge of selecting the S3 bucket holding the content), which means the Lambda function executed successfully. If it hadn't, I should not have been able to get the S3 content.
Where can I find the logs for my Lambda@Edge function?
For anyone else who might be facing the same issue, use the script mentioned on the same documentation page to find your log groups:
# Search every Region for log groups belonging to the Lambda@Edge function
FUNCTION_NAME=function_name_without_qualifiers
for region in $(aws --output text ec2 describe-regions | cut -f 4)
do
    for loggroup in $(aws --output text logs describe-log-groups --log-group-name-prefix "/aws/lambda/us-east-1.$FUNCTION_NAME" --region $region --query 'logGroups[].logGroupName')
    do
        echo $region $loggroup
    done
done
Create a file, paste the above script into it, replace function_name_without_qualifiers with your function's name, make it executable, and run it. It will find the Regions and log groups for your Lambda@Edge function. The lesson learned here is that the log group is not named like an ordinary log group. Instead, it follows this structure:
/aws/lambda/${region}.${function_name}
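If you already suspect a particular Region (for example, one near your users), you can check it directly with a one-off query. A minimal sketch; the function name and the Region are placeholders:
# my-edge-function is the Lambda@Edge function deployed in us-east-1,
# eu-west-2 is a guess at the edge Region that served the request
aws logs describe-log-groups \
  --log-group-name-prefix "/aws/lambda/us-east-1.my-edge-function" \
  --region eu-west-2 \
  --query 'logGroups[].logGroupName' \
  --output text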
It seems that the output of logs describe-log-groups has also changed. When I tried the script, it returned nothing, but with "/aws/lambda/$FUNCTION_NAME" instead of "/aws/lambda/us-east-1.$FUNCTION_NAME" the script returns the list of log groups with the following structure:
${region} /aws/lambda/${function_name}
Last but not least, it would be good to check the Lambda function's role permissions.
In my case that was the problem, because by default it allowed writing logs to only one Region (us-east-1).
Here is what my policy looks like now:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:*:{account-id}:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:{account-id}:log-group:/aws/lambda/{function-name}:*"
            ]
        }
    ]
}
{account-id} - your AWS Account ID
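If you prefer the CLI over the console for updating the execution role, here is a minimal sketch; the role name, policy name, and file path are placeholders, and the policy above is assumed to be saved in that file:
aws iam put-role-policy \
  --role-name my-edge-function-role \
  --policy-name AllowEdgeLogsAllRegions \
  --policy-document file://edge-logs-policy.json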
I have an RDS SQL Server instance and it has the default sqlserver_audit parameter group, but I am not seeing any recent events. What is the issue?
A screen shot of what I am seeing:
Events generated from the sqlserver_audit parameter group (HIPAA audit) are not directly visible to you in the AWS Console. For more info about the HIPAA audit implementation in RDS for SQL Server, see this AWS forum post.
When you want to see events from your SQL Server audits, you need to use the SQLSERVER_AUDIT option. In that case, RDS will stream data from the audits on your RDS instance to your S3 bucket. You can also configure a retention time, during which those .sqlaudit files are kept on the RDS instance and can be accessed via msdb.dbo.rds_fn_get_audit_file. For more info, see the documentation.
In both cases, "Recent events" will contain only important messages related to your instance, not audited events. So, for example, whenever RDS can't access your S3 bucket for writing in order to store your audits, it will tell you so in "Recent events".
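If you want to see which settings the SQLSERVER_AUDIT option accepts for your engine edition before configuring it, a hedged sketch; the engine name is a placeholder, so pick the one matching your instance:
# List the SQLSERVER_AUDIT option and its settings for a given engine
aws rds describe-option-group-options \
  --engine-name sqlserver-se \
  --query "OptionGroupOptions[?Name=='SQLSERVER_AUDIT']"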
Vasek's answer helped me understand why I wasn't seeing logs show up in my S3 bucket: the inline IAM policy attached to the IAM role used to transfer the audit logs was incorrect.
If you use the automated option-group creation wizard to add the SQLSERVER_AUDIT option to your RDS instance, be sure you don't include a trailing slash on your S3 key prefix.
The incorrect IAM policy statement the AWS option group creation wizard created is shown below.
{
    "Effect": "Allow",
    "Action": [
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload",
        "s3:PutObject"
    ],
    "Resource": [
        "arn:aws:s3:::my-audit-logs-bucket/audits//*" # <---- INCORRECT
    ]
}
I changed my SQLSERVER_AUDIT option group to use the bucket's root and changed the IAM policy to the correct configuration shown below, and my audit logs started showing up in my S3 bucket.
{
    "Effect": "Allow",
    "Action": [
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload",
        "s3:PutObject"
    ],
    "Resource": [
        "arn:aws:s3:::my-audit-logs-bucket/*"
    ]
}
From the docs:
RDS uploads the completed audit logs to your S3 bucket, using the IAM role that you provide. If you enable retention, RDS keeps your audit logs on your DB instance for the configured period of time.
So the log events will be in S3, assuming all permissions are set correctly, not in the RDS Events console.
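Once the option and IAM role are set up, the completed audit files should appear under your configured prefix. A quick check, with the bucket name as a placeholder:
aws s3 ls s3://my-audit-logs-bucket/ --recursive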
I'm trying to transfer a domain name from one AWS account to another AWS account using AWS CLI. When I try to transfer the domain I get the following error:
Connect timeout on endpoint URL: "https://route53domains.eu-west-1.amazonaws.com/"
I'm using the following command to transfer the domain
aws route53domains transfer-domain-to-another-aws-account --domain-name <value> --account-id <value> --profile personal
I checked the AWS config file, and it looks fine to me:
[profile personal]
aws_access_key_id = somekey
aws_secret_access_key = somesecretkey
region = us-west-2
I've also made sure that the user has the correct permissions. The user has the following policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "route53domains:*",
            "Resource": "*"
        }
    ]
}
and also has the AdministratorAccess AWS managed policy.
To make sure I can communicate with AWS, I ran a simple command, aws s3 ls --profile personal, and it works: AWS responds with the contents of S3.
The version of AWS CLI I have installed is
aws-cli/2.0.18 Python/3.7.4 Darwin/19.4.0 botocore/2.0.0dev22
I'm not sure where I'm going wrong.
You will need to specify --region us-east-1 because Amazon Route 53 is a global service.
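For example, the OP's command with the Region pinned; the domain name and account ID are placeholders:
aws route53domains transfer-domain-to-another-aws-account \
  --domain-name example.com \
  --account-id 123456789012 \
  --region us-east-1 \
  --profile personal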
I was going to rebuff the answer given by John Rotenstein but, on closer examination, it is indeed correct. It could do with some more detail though, so I shall elaborate.
The OP didn't miss anything: this need for --region us-east-1 (and only us-east-1) to be included in the command is not mentioned in either the route53domains docs or the docs of the subcommand, transfer-domain-to-another-aws-account. It does pop up on the list-operations page, but even there it's not as noticeable as it could be^^.
^^: You might ask yourself why defaulting to us-east-1 for this set of commands isn't built into the AWS CLI anyway, since there are no other options.
I'm having trouble triggering my AWS Lambda function.
The function works perfectly when I click Test. I've created a new scheduled rule that triggers the Lambda function every minute, but it works once and then never again. I've also tried using a cron expression, with the same results.
The logs should show the output of a print statement, but instead they read:
02:07:40
START RequestId: |numbers| Version: 8
02:07:40
END RequestId: |numbers|
I've clicked Enable on 'CloudWatch Events will add necessary permissions for target(s) so they can be invoked when this rule is triggered.', so I suspect that my permissions aren't an issue.
As a side note, I've done everything on the console and am not really sure how to properly use the CLI. Any help would be wonderful. Thank you.
The best way is to start simple, then build up to the ultimate goal.
Start by creating an AWS Lambda function that simply prints something to the log file. Here is an example in Python:
def lambda_handler(event, context):
    print('Within function')
Then, ensure that the function has been assigned an IAM Role with the AWSLambdaBasicExecutionRole policy, or another policy that grants access to CloudWatch Logs:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
Then, configure CloudWatch Events to trigger the function once per minute and check the log files in Amazon CloudWatch Logs to confirm that the function is executing.
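If you want to do the same from the CLI rather than the console, here is a rough sketch; the rule name, function name, Region, and account ID are placeholders:
# Create a rule that fires every minute
aws events put-rule \
  --name my-every-minute-rule \
  --schedule-expression "rate(1 minute)"

# Point the rule at the Lambda function
aws events put-targets \
  --rule my-every-minute-rule \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:my-function"

# Allow CloudWatch Events to invoke the function (the console checkbox the
# OP mentions does roughly this for you)
aws lambda add-permission \
  --function-name my-function \
  --statement-id allow-events-invoke \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/my-every-minute-rule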
This will hopefully work correctly. It's then just a matter of comparing the configurations to find out why the existing function is not successfully running each minute. You can also look at the Monitoring tab to see whether any executions produced errors.
OK, here's where I went wrong:
According to this answer: https://forums.aws.amazon.com/thread.jspa?threadID=264583, AWS only runs the code outside the handler once, when the execution environment is initialized; on later invocations of a warm function, only the handler runs. I needed to put all of my code into the handler to fix this.
I'm currently working on a Lambda@Edge function.
I cannot find any logs on CloudWatch or other debugging options.
When running the Lambda using the "Test" button, the logs are written to CloudWatch.
When the Lambda function is triggered by a CloudFront event, the logs are not written.
I'm 100% positive that the event trigger works, as I can see its result.
Any idea how to proceed?
Thanks ahead,
Yossi
1) Ensure you have given the Lambda function permission to send logs to CloudWatch. Below is the AWSLambdaBasicExecutionRole policy, which you need to attach to the execution role that you are using for your Lambda function.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
2) Lambda creates CloudWatch Logs log groups in the CloudWatch Logs Regions closest to the locations where the function is executed. The format of the name for each log group is /aws/lambda/us-east-1.function-name, where function-name is the name that you gave to the function when you created it. So ensure you are checking the CloudWatch logs in the correct Region.
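For example, once you know the edge Region that served your request, you can follow the logs from the CLI. A sketch, assuming AWS CLI v2; the function name and Region are placeholders:
aws logs tail "/aws/lambda/us-east-1.my-edge-function" \
  --region eu-west-2 \
  --follow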
In case anyone finds it useful: the fact that AWS prefixes your function name (which breaks the built-in "CloudWatch at a glance" dashboard) and that Lambda@Edge runs across multiple Regions inspired me to create this CloudWatch Dashboard template, which gives you similar standard monitoring for all Regions in one dashboard.
With the latest Aurora update (1.8), the command LOAD DATA FROM S3 was introduced. Has anyone gotten this to work? After upgrading to 1.8, I followed the setup guide Here to create the Role to allow access from RDS to S3.
After rebooting the server and trying to run the command
LOAD DATA FROM S3 PREFIX 's3://<bucket_name>/prefix' INTO TABLE table_name
in SQL Workbench/J, I get the errors:
Warnings:
S3 API returned error: Missing Credentials: Cannot instantiate S3 Client
S3 API returned error: Failed to instantiate S3 Client
Internal error: Unable to initialize S3Stream
Are there any additional steps required? Can I only run this from the SDK? I don't see that mentioned anywhere in the documentation.
I had the same issue. I tried adding AmazonS3FullAccess to the IAM role that my RDS instances were using...no joy.
After poking around, I went into the RDS console, to Clusters, selected my Aurora cluster, and clicked Manage IAM Roles. It gave me a drop-down, and I selected the IAM role (the same one that the individual instances were using).
Once I did that, all was well and data load was nice and fast.
So, there are (for us) 5 steps/components:
1) The S3 bucket and bucket policy to allow a user to upload the object
{
    "Version": "2012-10-17",
    "Id": "Policy1453918146601",
    "Statement": [
        {
            "Sid": "Stmt1453917898368",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<account id>:<user/group/role>/<IAM User/Group/Role>"
            },
            "Action": [
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::<bucket name>/*"
        }
    ]
}
The "Principal" would be whatever IAM user, group or role will be uploading the data files to the bucket so that the RDS instance can import the data.
2) The IAM policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1486490368000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket name>",
                "arn:aws:s3:::<bucket name>/*"
            ]
        }
    ]
}
This is pretty simple with the Policy Generator.
3) Create the IAM Role:
This role should be assigned the IAM policy above. You can probably do an inline policy, too, if you're not going to use this policy for other roles down the line, but I like the idea of having a defined policy that I can reference later if I have a need.
4) Configure a Parameter Group that your cluster/instances will use to set the aws_default_s3_role value to the ARN of the role from #3 above.
5) Configure the Aurora Cluster by going to Clusters, selecting your cluster, selecting Manage IAM Roles, and setting the IAM Role for your DB Cluster (a CLI sketch of steps 4 and 5 is included after this answer)
At least for me, these steps worked like a charm.
Hope that helps!
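For reference, a hedged CLI equivalent of steps 4 and 5 above; the parameter group name, cluster identifier, and role ARN are placeholders:
# Step 4: point aws_default_s3_role at the role's ARN in your custom
# DB cluster parameter group
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name my-aurora-cluster-params \
  --parameters "ParameterName=aws_default_s3_role,ParameterValue=arn:aws:iam::123456789012:role/AllowAuroraS3Role,ApplyMethod=immediate"

# Step 5: associate the same role with the cluster
aws rds add-role-to-db-cluster \
  --db-cluster-identifier my-aurora-cluster \
  --role-arn arn:aws:iam::123456789012:role/AllowAuroraS3Role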
If the only error is Internal error: Unable to initialize S3Stream and it throws this error immediately, possible culprits are:
typo in the bucket or object name
bucket created in a different region than the database (a quick way to check is sketched at the end of this answer)
bucket or object name is not specified according to the syntax for specifying a path to files stored on an Amazon S3 bucket: s3-region://bucket-name/file-name-or-prefix
The path includes the following values:
region (optional) – The AWS Region that contains the Amazon S3 bucket to load from. This value is optional. If you don't specify a region value, then Aurora loads your file from Amazon S3 in the same region as your DB cluster.
bucket-name – The name of the Amazon S3 bucket that contains the data to load. Object prefixes that identify a virtual folder path are supported.
file-name-or-prefix – The name of the Amazon S3 text file or XML file, or a prefix that identifies one or more text or XML files to load. You can also specify a manifest file that identifies one or more text files to load.
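To rule out the Region-mismatch culprit above, you can check where the bucket actually lives. The bucket name is a placeholder; a null LocationConstraint means us-east-1:
aws s3api get-bucket-location --bucket my-data-bucket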
After all the suggestions above, as a final step, I had to add a VPC Endpoint to S3. After that, everything started working.
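A rough sketch of adding a gateway VPC endpoint for S3 from the CLI; the VPC ID, route table ID, and the Region in the service name are placeholders:
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0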
March 2019:
The RDS console doesn't have the option to change the role anymore. What worked for me was to add the role via the CLI and then reboot the writer instance.
aws rds add-role-to-db-cluster --db-cluster-identifier my-cluster --role-arn arn:aws:iam::123456789012:role/AllowAuroraS3Role
For me, I was missing the step to add the created RDS role to my S3 bucket. Once I added it, it worked instantly.
You need to attach the AmazonS3ReadOnlyAccess or AmazonS3FullAccess policy to the role you set up in IAM. This step was not included in the setup guide.
Go to IAM -> Roles in the AWS console, select the role you are using, click 'attach policy', then scroll way down to the S3 policies and pick one.
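The CLI equivalent, if you prefer it; the role name is a placeholder:
aws iam attach-role-policy \
  --role-name AllowAuroraS3Role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess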
I reached out to the Amazon Aurora team and they confirmed there are edge cases with some of the servers having this issue. They are rolling out a patch to fix the issue soon, but in the meantime they manually applied the patch to my cluster.
I have experienced multiple occasions on which this error can occur.
The error was thrown after running the 'LOAD' SQL for a while (around 220 s), which looked like a time-out. I finally found that my RDS's subnet group had only one outbound rule, and it did not include one to S3. Adding the outbound rule fixed this issue.
The error was thrown immediately (0.2 s). I had successfully loaded data from S3 before, but after a change to the S3 URL this error occurred again. I was using the wrong S3 URL, because I wanted to use an S3 prefix instead of a file. Check the LOAD syntax to make your SQL right.
It worked for me after following steps 2 to 5 and creating a VPC endpoint for S3 access.
I had the same error as I was trying to LOAD DATA FROM S3 using MySQL Workbench. I was already able to successfully CREATE DATABASE and CREATE TABLE and so I knew my connection was working.
I closely followed all of the AWS documentation instructions for Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket.
In my case, I had not correctly followed instruction steps 3 & 4 (see the list of instructions under the subheading "Giving Aurora access to Amazon S3" at the link above).
What fixed it for me:
From Amazon RDS, I selected "Parameter Groups" in the navigation pane on the left.
Then I clicked on my newly created custom DB cluster parameter group (step 3 from the link above).
From within my custom group, I searched for aurora_load_from_s3_role, and then in the "Values" entry box I copy/pasted the ARN for the Role that I had just created in step 2 of the instructions and clicked Save (step 4 from the link above).
I went back to MySQL Workbench and reran my LOAD DATA FROM S3 command and it worked!
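For anyone who prefers the CLI, a hedged equivalent of the parameter-group step above; the parameter group name and role ARN are placeholders, and you may still need to reboot or wait for the parameter to apply:
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name my-custom-cluster-params \
  --parameters "ParameterName=aurora_load_from_s3_role,ParameterValue=arn:aws:iam::123456789012:role/AllowAuroraS3Role,ApplyMethod=immediate"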