How to configure AWS credentials to set up CloudWatch with Fluent Bit - amazon-web-services

I need to send logs to CloudWatch using Fluent Bit from an application hosted on my local system, but I am unable to configure the AWS credentials for Fluent Bit to send logs to CloudWatch.
It would be of great help if anyone can help me with this.
Some of the logs are as follows:
[aws_credentials] Initialized Env Provider in standard chain
[aws_credentials] Failed to initialized profile provider: $HOME not set and AWS_SHARED_CREDENTIALS_FILE not set.
[aws_credentials] Not initializing EKS provider because AWS_ROLE_ARN was not set
[aws_credentials] Initialized EC2 Provider in standard chain
[aws_credentials] Not initializing ECS Provider because AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is not set
[aws_credentials] Sync called on the EC2 provider
[aws_credentials] Init called on the env provider
[aws_credentials] Init called on the EC2 IMDS provider
[aws_credentials] requesting credentials from EC2 IMDS

Any standard way to pass credentials should work here:
export the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY,
or create ~/.aws/credentials (the aws configure prompt can do this for you).
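For example, a minimal sketch (the key values are placeholders). Note from the log above that the profile provider needs $HOME or AWS_SHARED_CREDENTIALS_FILE to find a credentials file:
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE          # placeholder
export AWS_SECRET_ACCESS_KEY=wJalrEXAMPLE     # placeholder
# or point Fluent Bit at a credentials file explicitly, which also
# works when $HOME is not set:
export AWS_SHARED_CREDENTIALS_FILE=/path/to/aws_credentials
# in a container, pass the variables through, e.g.:
docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY fluent/fluent-bit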

During my testing the variables weren't enough, and the same was true for the credentials file.
I installed the AWS CLI, configured it with the keys, and now it works as expected. I am working with containers, and the AWS CLI adds extra size that I don't need, so if anyone knows a way to do it without it, that would be awesome.

Related

AWS Batch Job application not being able to send SNS notification

I have an AWS Batch job, a .NET Core app running as a container, which downloads a CSV from an SFTP server, parses it, and inserts the data into AWS RDS.
When the CSV is corrupt the job fails and is supposed to send an SNS notification; instead I see the following error in the CloudWatch logs:
"Message": "User: arn:aws:sts::654001826221:assumed-role/fileimportworker-batch/5f77c736e4e64c2d82df278800ec4f25 is not authorized to perform: SNS:Publish on resource: arn:aws:sns:eu-west-1:accountIdHere:Test-SNS-Batch",
The IAM role attached to the Batch job allows SNS:Publish and S3, and also provides read access to 2 secrets in Secrets Manager. The S3 and Secrets Manager access works: the task is able to download the file from SFTP, put it to S3, and read the RDS password from Secrets Manager.
An AWS Batch job may use credentials from the container instead of your environment variables. Have a look at the credential precedence:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-precedence
The AWS CLI uses credentials and configuration settings located in multiple places, such as the system or user environment variables, local AWS configuration files, or explicitly declared on the command line as a parameter. Certain locations take precedence over others. The AWS CLI credentials and configuration settings take precedence in the following order:
Command line options – Overrides settings in any other location. You can specify --region, --output, and --profile as parameters on the command line.
Environment variables – You can store values in your system's environment variables.
CLI credentials file – The credentials and config file are updated when you run the command aws configure. The credentials file is located at ~/.aws/credentials on Linux or macOS, or at C:\Users\USERNAME\.aws\credentials on Windows. This file can contain the credential details for the default profile and any named profiles.
CLI configuration file – The credentials and config file are updated when you run the command aws configure. The config file is located at ~/.aws/config on Linux or macOS, or at C:\Users\USERNAME\.aws\config on Windows. This file contains the configuration settings for the default profile and any named profiles.
Container credentials – You can associate an IAM role with each of your Amazon Elastic Container Service (Amazon ECS) task definitions. Temporary credentials for that role are then available to that task's containers. For more information, see IAM Roles for Tasks in the Amazon Elastic Container Service Developer Guide.
Instance profile credentials – You can associate an IAM role with each of your Amazon Elastic Compute Cloud (Amazon EC2) instances. Temporary credentials for that role are then available to code running in the instance. The credentials are delivered through the Amazon EC2 metadata service. For more information, see IAM Roles for Amazon EC2 in the Amazon EC2 User Guide for Linux Instances and Using Instance Profiles in the IAM User Guide.
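A quick way to see which credentials actually take effect inside the container is to ask STS for the caller identity:
aws sts get-caller-identity
This prints the UserId, Account, and Arn of the identity in use; it should match the assumed-role ARN from the error message above if the container credentials win.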
P.S. To integrate AWS Batch with SNS without writing code, you can use an EventBridge rule that listens for event patterns from AWS Batch. You just set the rule's target to publish the message to the SNS topic you want.
https://docs.aws.amazon.com/batch/latest/userguide/batch_sns_tutorial.html
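Following that tutorial, a rule matching failed Batch jobs could use an event pattern roughly like this (a sketch; the SNS topic is configured as the rule's target, not inside the pattern):
{
  "source": ["aws.batch"],
  "detail-type": ["Batch Job State Change"],
  "detail": {
    "status": ["FAILED"]
  }
}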

Serverless Error: The security token included in the request is invalid

When I type serverless deploy, this error appears:
ServerlessError: The security token included in the request is invalid.
I had to specify --aws-profile in my serverless deploy commands, like this:
sls deploy --aws-profile common
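For --aws-profile to work, the named profile must exist in ~/.aws/credentials; for example (common is just the profile name used above, and the values are placeholders):
[common]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>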
Can you provide more information?
Make sure that you've got the correct credentials in ~/.aws/config and ~/.aws/credentials. You can set these up by running aws configure. More info here: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-quick-configuration
Also make sure that the IAM user in question has an attached security policy that allows access to everything you need, such as CloudFormation.
Create a new user in AWS (don't use the root key).
On that user's Security credentials tab, generate a new access key.
Copy the values and run this:
serverless config credentials --overwrite --provider aws --key bar --secret foo
sls deploy
In my case the localstack entry was missing from the serverless file.
I had everything that should be inside it, but it was all directly under custom (instead of under custom.localstack).
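For reference, the serverless-localstack plugin reads its settings from custom.localstack, roughly like this (a sketch; the stage name is an example):
custom:
  localstack:
    stages:
      - local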
In my case, I added region to the provider. I suppose it's not read from the credentials file.
provider:
  name: aws
  runtime: nodejs12.x
  region: cn-northwest-1
In my case, multiple credentials were stored in the ~/.aws/credentials file,
and Serverless was picking up the default credentials.
So I put the new credentials under [default] and removed the previous ones, and that worked for me.
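Alternatively, you can keep both sets of keys as named profiles and tell Serverless which one to use, for example (values are placeholders):
[default]
aws_access_key_id = <new key id>
aws_secret_access_key = <new secret>
[previous]
aws_access_key_id = <old key id>
aws_secret_access_key = <old secret>
and then deploy with sls deploy --aws-profile previous when you need the old account.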
To run the function on AWS you need to configure AWS with access_key_id and secret_access_key,
but you might get this error if you want to run the function locally.
In that case, use this command:
sls invoke local -f functionName
It will run the function locally, not on AWS.
If none of these answers work, it may be because you need to add a provider in your Serverless dashboard account and add your AWS keys there.

The AWS Access Key Id you provided does not exist in our records, but credentials were already set

Using the boto3 library, I uploaded and downloaded a file from AWS S3 successfully.
But after a few hours, the same code suddenly fails with InvalidAccessKeyId.
What I have done:
set ~/.aws/credentials
set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
I tried the following solutions, but the error still happens:
adding quotes on config values
Am I missing anything? Thanks for your help.
You do not need to configure both .aws/credentials AND environment variables.
From Credentials — Boto 3 documentation:
The order in which Boto3 searches for credentials is:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
The fact that your credentials stopped working after a period of time suggests that they were temporary credentials created via the AWS Security Token Service, with an expiry time. (Temporary access key IDs typically begin with ASIA, while long-term IAM user keys begin with AKIA, and temporary credentials also require a session token.)
If you have the credentials in ~/.aws/credentials there is no need to set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Environment variables are only valid for the current shell session.
If you are using boto3, you can also specify the credentials when creating the client itself.
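For example, a minimal sketch (the key values are placeholders, and hard-coding keys is best kept to quick tests):
import boto3

# Credentials passed explicitly take precedence over every other source
s3 = boto3.client(
    's3',
    aws_access_key_id='AKIA...',       # placeholder
    aws_secret_access_key='...',       # placeholder
    aws_session_token='...',           # only needed for temporary (STS) credentials
)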
The best way to configure AWS credentials is to install the AWS Command-Line Interface (CLI) and run aws configure from the bash console, which writes ~/.aws/credentials in this format:
[default]
aws_access_key_id = ***********
aws_secret_access_key = ************
I found an article about the same issue.
Amazon suggests generating a new key, and I did.
Then it worked, though we don't know the root cause.
I suggest doing the same; it can save a lot of time when you hit this problem.

What credentials does Boto3 use when running in AWS CodeBuild?

So I've written a set of deployment scripts that run in CodeBuild and use Boto3 to deploy some dockerised apps to ECS. The problem I'm having is when I want to deploy to our separate production account.
If I'm running the CodeBuild project from the dev account but want to create resources in the production account, my understanding is that I should set up a role in the target account, allow the CodeBuild role to assume it, and then call:
response = sts_client.assume_role(
    RoleArn=arn_of_a_role_I_set_up,
    RoleSessionName=some_name
)
This returns an access key, secret key, and session token. This works and returns what I'd expect.
Then what I want to do is just assign those values to these environment variables:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
This is because, according to the documentation here: http://boto3.readthedocs.io/en/latest/guide/configuration.html, Boto3 should fall back to those environment variables if you don't explicitly pass credentials in the client or session methods.
However, when I do this the resources still get created in the same dev account.
Also, if I call printenv in the first part of my buildspec.yml before my scripts attempt to set the environment variables, those AWS key/secret/token variables aren't present at all.
So when it's running in CodeBuild, where is Boto3 getting its credentials from?
Is the solution just going to be to pass in a key/secret/token to every boto3.client() call to be perfectly sure?
The credentials in the CodeBuild environment come from the service role associated with your CodeBuild project. Boto3 and botocore automatically use the container provider ("ContainerProvider") to grab those credentials in the CodeBuild environment.
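If the goal is cross-account deployment, one option is to skip the environment variables entirely and hand the assumed-role credentials straight to a new Boto3 session; a sketch, with a placeholder role ARN:
import boto3

sts = boto3.client('sts')  # picks up the CodeBuild service role automatically
resp = sts.assume_role(
    RoleArn='arn:aws:iam::PROD_ACCOUNT_ID:role/deploy-role',  # placeholder
    RoleSessionName='codebuild-deploy',
)
creds = resp['Credentials']

# Clients created from this session act in the production account
prod = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
ecs = prod.client('ecs')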

Packer amazon-ebs : AuthFailure

For some reason Packer fails to authenticate to AWS; using the plain aws CLI works, though, and my environment variables are correctly set:
AWS_ROLE_SESSION_NAME=...
AWS_SESSION_TOKEN=...
AWS_SECRET_ACCESS_KEY=...
AWS_ROLE=...
AWS_ACCESS_KEY_ID=...
AWS_CLI=...
AWS_ACCOUNT=...
AWS_SECURITY_TOKEN=...
I am authenticating with aws saml, and Packer gives me the following:
Error querying AMI: AWS was not able to validate the provided access credentials (AuthFailure)
The problem lies in the way Packer authenticates with AWS.
Packer is written in Go and uses goamz for authentication. When you create a config using aws saml, a couple of files are generated in ~/.aws: config and credentials.
It turns out this credentials file takes precedence over the environment variables, so if those credentials are incorrect and you rely on your environment variables, you will get the same error.
Since aws saml needs aws_access_key_id and aws_secret_access_key to be defined, deleting the credentials file alone would not suffice in this case.
We had to copy these values into ~/.aws/config and delete the credentials file; then Packer was happy to use our environment variables.
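For reference, the resulting ~/.aws/config looked roughly like this (values are placeholders):
[default]
aws_access_key_id = <key id required by aws saml>
aws_secret_access_key = <secret key required by aws saml>
With the credentials file gone, Packer falls back to the environment variables.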
A ticket has been raised on GitHub for goamz so that the AWS CLI and Packer can share the same authentication behaviour; feel free to vote it up if you have this issue too: https://github.com/mitchellh/goamz/issues/171