An AWS SQS queue URL looks like this:
sqs.us-east-1.amazonaws.com/1234567890/default_development
And here are the parts broken down:
| Always same | Stored in env var | Always same | ? | Stored in env var |
| ----------- | ----------------- | ----------- | - | ----------------- |
| sqs | us-east-1 | amazonaws.com | 1234567890 | default_development |
So I can reconstruct the queue URL based on things I know except the 1234567890 part.
What is this number and is there a way, if I have my AWS creds in env vars, to get my hands on it without hard-coding another env var?
The 1234567890 should be your AWS account number.
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/ImportantIdentifiers.html
If you don't have access to the queue URL directly (e.g. you can get it directly from CloudFormation if you create it there), you can call the GetQueueUrl API. It takes a QueueName parameter and an optional QueueOwnerAWSAccountId, and it is the preferred method of getting the URL. It is true that the URL is a well-formed URL based on the account and region, and I wouldn't expect that to change at this point, but it could be different in a region like the China regions or the GovCloud regions.
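For example, a minimal boto3 sketch of that call (the queue name is the one from the question; the account ID argument is only needed for queues owned by another account):

import boto3

sqs = boto3.client('sqs')  # picks up the AWS credentials and region already in your env vars

# GetQueueUrl returns the full queue URL without hard-coding the account number
response = sqs.get_queue_url(
    QueueName='default_development',
    # QueueOwnerAWSAccountId='1234567890',  # optional, for queues in another account
)
print(response['QueueUrl'])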
When using aws configure, the credentials are stored on my workstation in clear text. This is a HUGE security violation. I tried opening an issue at the aws cli GitHub and it was summarily closed. I am using Terraform AND the aws cli directly, so a work-around needs to support both.
Example:
[MyProfile]
aws_access_key_id = xxxxxxxxxxxxxxx
aws_secret_access_key = yyyyyyyyyyyyyyyyyy
region=us-east-2
output=json
This is the simplest work-around I could find.
References:
https://devblogs.microsoft.com/powershell/secretmanagement-and-secretstore-are-generally-available/
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sourcing-external.html
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.secretmanagement/?view=ps-modules
The following PowerShell creates an encrypted vault.
#This will destroy existing AWS vault
#The Vault will be set accessible to the current User with no password.
#When AWS CLI invokes this there is no way to request a password.
Install-Module Microsoft.PowerShell.SecretManagement
Install-Module Microsoft.PowerShell.SecretStore
Set-SecretStoreConfiguration -Authentication None -Scope CurrentUser -Interaction None
Register-SecretVault -Name "AWS" -ModuleName Microsoft.PowerShell.SecretStore -DefaultVault -AllowClobber
Set-Secret -Vault "AWS" -Name "test" -Secret "test"
Get-SecretVault
Write-Host "Vault Created"
The following PowerShell creates the secret. Notice that it is also possible to set an expiration on the secret.
#Avoid $profile here, which is a PowerShell automatic variable
$secretName = Read-Host -Prompt "Enter AWS Account Number"
$aws_access_key_id = Read-Host -Prompt "Enter AWS access key"
$aws_secret_access_key = Read-Host -Prompt "Enter AWS secret access key"
$secretIn = @{
    Version=1;
    AccessKeyId=$aws_access_key_id;
    SecretAccessKey=$aws_secret_access_key;
    SessionToken=$null; #"the AWS session token for temporary credentials";
    #Expiration="ISO8601 timestamp when the credentials expire";
}
$secret = ConvertTo-Json -InputObject $secretIn
Set-Secret -Name $secretName -Secret $secret
This file, named credential_process.cmd, needs to be located on the PATH or next to terraform.exe.
@echo off
REM This file needs to be accessible to the aws cli or programs using it.
REM To support other paths, copy it to C:\Program Files\Amazon\AWSCLIV2
Powershell.exe -Command "Get-Secret -Vault AWS -Name %1 -AsPlainText "
Finally, in your {user}\.aws\credentials file, place the following entry:
[XXXXX-us-east-1]
credential_process = credential_process.cmd "XXXXX"
region=us-east-1
output=json
Now you can run an aws cli command (or Terraform) using:
aws ec2 describe-vpcs --profile XXXXX-us-east-1
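Since other tools (Terraform, the SDKs) read the same shared credentials file, they should pick this profile up as well; for example, a quick boto3 check (profile name as above):

import boto3

# Uses the credential_process entry from the shared credentials file
session = boto3.Session(profile_name='XXXXX-us-east-1', region_name='us-east-1')
print(session.client('ec2').describe_vpcs()['Vpcs'])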
Drawbacks:
There is no way to prevent a user from using the simple aws configure statement and storing credentials in the clear.
There is no way to force an admin to use this method.
Like everything else AWS:
The complexity is unnecessary.
The documentation is very detailed, but somehow always missing important information.
Everything is a hack-job.
Possibilities:
It is possible to create a user (User1) that has access only to a certain secret in secret manager (User2 credentials).
User1 credentials are stored in the local Vault.
User1 would fetch the User2 credentials from Secrets Manager during invocation of credential_process.cmd (a sketch of this is below).
Person is never given the User2 credentials directly.
This would force the user to use the method above.
However, the implementation of this should be in the aws configure, not hacked together. This would allow other dependent tools to just work once the configuration is complete.
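As an illustration only, here is a minimal Python sketch of such a credential_process helper. It assumes the secret name is passed as the first argument, the User1 credentials are already available to boto3 (profile or environment variables), and the secret value is the same Version/AccessKeyId/SecretAccessKey JSON document shown above:

import sys

import boto3

def main():
    # Hypothetical layout: the secret is named after the account/profile (argv[1])
    secret_id = sys.argv[1]
    client = boto3.client('secretsmanager')  # runs with User1's credentials
    response = client.get_secret_value(SecretId=secret_id)
    # credential_process expects the JSON document on stdout
    print(response['SecretString'])

if __name__ == '__main__':
    main()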
I came across this link a while back and thought it was excellent in explaining all the different options that you can try to solve the problem that you described above.
Never put AWS temporary credentials in the credentials file (or env vars)—there’s a better way | by Ben Kehoe | Oct, 2021 | Medium
This is a HUGE security violation. I tried opening an issue at the aws cli github and it was summarily closed.
Running on AWS, you can use the instance role (for EC2) or execution role (for Lambda or ECS).
Running outside AWS, there is not much better option. If someone gets access to the home directory, it's not your computer anymore. However, the credentials can also be passed as environment variables or CLI/API parameters.
These can be encrypted and decrypted, or requested at the time of use, but you still need access to the decryption key or service.
You can actually use something like aws-vault: it stores the secrets in the local keychain and creates a temporary shell with the credentials as environment variables, or you can just exec a specific command without creating a whole shell.
Another similar tool is vaulted, which stores credentials in an encrypted file and creates a temporary shell session when you want to use them.
How to use AWS services like CloudTrail or CloudWatch to check which user performed event DeleteObject?
I can use S3 Event to send a Delete event to SNS to notify an email address that a specific file has been deleted from the S3 bucket but the message does not contain the username that did it.
I can use CloudTrail to log all events related to an S3 bucket to another bucket, but when I tested it, it logs many details yet only records the PutObject event, not DeleteObject.
Is there any easy way to monitor an S3 bucket to find out which user deleted which file?
Update 19 Aug
Following Walt's answer below, I was able to log the DeleteObject event. However, I can only get the file name (requestParameters.key) for PutObject, but not for DeleteObjects.
| # | @timestamp | userIdentity.arn | eventName | requestParameters.key |
| - | ---------- | ---------------- | --------- | --------------------- |
| 1 | 2019-08-19T09:21:09.041-04:00 | arn:aws:iam::ID:user/me | DeleteObjects | |
| 2 | 2019-08-19T09:18:35.704-04:00 | arn:aws:iam::ID:user/me | PutObject | test.txt |
It looks like other people have had the same issue and AWS is working on it: https://forums.aws.amazon.com/thread.jspa?messageID=799831
Here is my setup.
Detailed instructions on setting up CloudTrail in the console. When setting up the CloudTrail, double check these two options:
That you are logging S3 writes. You can do this for all S3 buckets or just the one you are interested in. You also don't need to enable read logging to answer this question.
And that you are sending events to CloudWatch Logs.
If you made changes to the S3 write logging, you might have to wait a little while. If you haven't had breakfast, lunch, a snack, or dinner, now would be a good time.
If you're using the same default CloudWatch log group as I have above, this link to the CloudWatch Logs Insights search should work for you.
This is a query that will show you all S3 DeleteObject calls. If the link doesn't work:
Go to the CloudWatch console.
Select Logs -> Insights on the left-hand side.
Enter the value for "Select log group(s)" that you specified above.
Enter this in the query field:
fields @timestamp, userIdentity.arn, eventName, requestParameters.bucketName, requestParameters.key
| filter eventSource == "s3.amazonaws.com"
| filter eventName == "DeleteObject"
| sort @timestamp desc
| limit 20
If you have any CloudTrail S3 DeleteObject calls in the last 30 minutes, the last 20 events will be shown.
As of 2021/04/12, CloudTrail does not record the object key(s) or path for DeleteObjects calls.
If you delete an object with the S3 console, it always calls DeleteObjects.
If you want the object keys to be recorded for deletions, you will need to delete individual files with DeleteObject (without the s). This can be done with the AWS CLI (aws s3 rm s3://some-bucket/single-filename) or direct API calls.
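For example, a minimal boto3 sketch of a single-object delete (the bucket and key names are just placeholders); this is the per-key DeleteObject API, which CloudTrail logs with requestParameters.key populated:

import boto3

s3 = boto3.client('s3')

# Single-key DeleteObject call, as opposed to the batch DeleteObjects call
# that the S3 console issues
s3.delete_object(Bucket='some-bucket', Key='single-filename')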
I am trying to find all EC2 instances in 10 different accounts which are running non-Amazon AMI images. The following CLI command gives me the list of all AMIs:
aws ec2 describe-instances --output text --query 'Reservations[*].Instances[*].[ImageId]' | sort | uniq -c
I think I can modify this further to get all non-Amazon AMIs, but is there a way to run this across 10 different accounts in one call?
is there a way to run this across 10 different accounts in one call?
No, that's not possible. You need to write a loop that iterates over each account, calling ec2 describe-instances once for each account.
Here's a script that can find instances using AMIs where the Owner is not amazon:
import boto3
ec2_client = boto3.client('ec2', region_name='ap-southeast-2')
instances = ec2_client.describe_instances()
# Get a set of AMIs used on all the instances
images = set(i['ImageId'] for r in instances['Reservations'] for i in r['Instances'])
# Find which of these are owned by Amazon
amis = ec2_client.describe_images(ImageIds=list(images), Owners=['amazon'])
amazon_amis = [i['ImageId'] for i in amis['Images']]
# Which instances are not using Amazon images?
non_amazon_instances = [(i['InstanceId'], i['ImageId']) for r in instances['Reservations'] for i in r['Instances'] if i['ImageId'] not in amazon_amis]
for i in non_amazon_instances:
print(f"{i[0]} uses {i[1]}")
A few things to note:
Deprecated AMIs might not have accessible information, so they might be marked as non-Amazon.
This script, as written, only works on one region. You could change it to loop through regions.
This script, as written, only works on one account. You would need a way to loop through credentials for other accounts (see the sketch below).
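A minimal sketch of that looping, assuming each account is reachable via a named CLI profile (the profile and region names below are placeholders):

import boto3

# Hypothetical profile names (one per account) and the regions to scan
profiles = ['account-1', 'account-2']
regions = ['ap-southeast-2', 'us-east-1']

for profile in profiles:
    for region in regions:
        session = boto3.Session(profile_name=profile, region_name=region)
        ec2_client = session.client('ec2')
        instances = ec2_client.describe_instances()
        # ...run the same AMI check as above against this ec2_client...
        count = sum(len(r['Instances']) for r in instances['Reservations'])
        print(f"{profile} {region}: {count} instances")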
Use AWS Config:
Create an aggregator in the root or a delegated account (wait for the aggregator to load).
Create a query:
SELECT
accountId,
resourceId,
configuration.keyName,
availabilityZone
WHERE
resourceType = 'AWS::EC2::Instance'
AND configuration.state.name = 'running'
More details:
https://aws.amazon.com/blogs/mt/org-aggregator-delegated-admin/
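A minimal boto3 sketch of running that same query against the aggregator (the aggregator name is a placeholder):

import boto3

config = boto3.client('config')

results = config.select_aggregate_resource_config(
    ConfigurationAggregatorName='my-aggregator',  # placeholder name
    Expression=(
        "SELECT accountId, resourceId, configuration.keyName, availabilityZone "
        "WHERE resourceType = 'AWS::EC2::Instance' "
        "AND configuration.state.name = 'running'"
    ),
)
# Each result is returned as a JSON string
for row in results['Results']:
    print(row)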
Is there a way to stream an AWS Log Group to multiple Elasticsearch Services or Lambda functions?
AWS only seems to allow one ES or Lambda, and I've tried everything at this point. I've even removed the ES subscription service for the Log Group, created individual Lambda functions, created the CloudWatch Log Trigger, and I can only apply the same CloudWatch Log trigger on one Lambda function.
Here is what I'm trying to accomplish:
CloudWatch Log Group ABC -> No Filter -> Elasticsearch Service #1
CloudWatch Log Group ABC -> Filter: "XYZ" -> Elasticsearch Service #2
Basically, I need one ES cluster to store all logs, and another to only have a subset of filtered logs.
Is this possible?
I've run into this limitation as well. I have two Lambdas (doing different things) that need to subscribe to the same CloudWatch Log Group.
What I ended up doing was creating one Lambda that subscribes to the Log Group and proxies the events into an SNS topic.
Those two Lambdas are now subscribed to the SNS topic instead of the Log Group.
For filtering events, you could implement the filters inside the Lambdas.
It's not a perfect solution but it's a functioning workaround until AWS allows multiple Lambdas to subscribe to the same CloudWatch Log Group.
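A minimal Python sketch of what that proxy Lambda could look like, assuming a hypothetical TOPIC_ARN environment variable; it unpacks the CloudWatch Logs payload and republishes each log event to SNS:

import base64
import gzip
import json
import os

import boto3

sns = boto3.client('sns')
TOPIC_ARN = os.environ['TOPIC_ARN']  # assumed to be set on the function

def handler(event, context):
    # CloudWatch Logs delivers a base64-encoded, gzipped JSON payload
    payload = gzip.decompress(base64.b64decode(event['awslogs']['data']))
    data = json.loads(payload)
    for log_event in data['logEvents']:
        # Each SNS subscriber (the two Lambdas) can filter as needed
        sns.publish(TopicArn=TOPIC_ARN, Message=log_event['message'])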
I was able to resolve the issue using a bit of a workaround through the Lambda function and also using the response provided by Kannaiyan.
I created the subscription to ES via the console, and then unsubscribed, and modified the Lambda function default code.
I declared two Elasticsearch endpoints:
var endpoint1 = '<ELASTICSEARCH ENDPOINT 1>';
var endpoint2 = '<ELASTICSEARCH ENDPOINT 2>';
Then, declared an array named "endpoint" with the contents of endpoint1 and endpoint2:
var endpoint = [endpoint1, endpoint2];
I modified the "post" function which calls the "buildRequest" function that then references "endpoint"...
function post(body, callback) {
for (var index = 0; index < endpoint.length; ++index) {
var requestParams = buildRequest(endpoint[index], body);
...
So every time the "post" function is called it cycles through the array of endpoints.
Then, I modified the buildRequest function that is in charge of building the request. This function by default calls the endpoint variable, but since the "post" function cycles through the array, I renamed "endpoint" to "endpoint_xy" to make sure it's not calling the global variable and instead uses the value passed into the function:
function buildRequest(endpoint_xy, body) {
var endpointParts = endpoint_xy.match(/^([^\.]+)\.?([^\.]*)\.?([^\.]*)\.amazonaws\.com$/);
...
Finally, I used the response provided by Kannaiyan on using the AWS CLI to implement the subscription to the logs, but corrected a few variables:
aws logs put-subscription-filter \
--log-group-name <LOG GROUP NAME> \
--filter-name <FILTER NAME> \
--filter-pattern <FILTER PATTERN> \
--destination-arn <LAMBDA FUNCTION ARN>
I kept the filters completely open for now, but will now code the filter directly into the Lambda function like dashmug suggested. At least I can split one log to two ES clusters.
Thank you everyone!
Seems like an AWS console limitation.
You can do it via the command line:
aws logs put-subscription-filter \
--log-group-name /aws/lambda/testfunc \
--filter-name filter1 \
--filter-pattern "Error" \
--destination-arn arn:aws:lambda:us-east-1:<ACCOUNT_NUMBER>:function:SendToKinesis
You also need to add permissions (e.g. allow CloudWatch Logs to invoke the Lambda function).
Full detailed instructions:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
Hope it helps.
As of September 2020 CloudWatch now allows two subscriptions to a single CloudWatch Log Group, as well as multiple Metric filters for a single Log Group.
Update: AWS posted October 2, 2020, on their "What's New" blog that "Amazon CloudWatch Logs now supports two subscription filters per log group".
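For example, a minimal boto3 sketch of attaching two subscription filters to the same log group (the log group, filter names, pattern, and destination ARNs are placeholders, and Lambda destinations still need the invoke permission mentioned above):

import boto3

logs = boto3.client('logs')

# First subscription: forward everything in the log group
logs.put_subscription_filter(
    logGroupName='/aws/lambda/testfunc',
    filterName='all-events',
    filterPattern='',  # empty pattern matches every log event
    destinationArn='arn:aws:lambda:us-east-1:<ACCOUNT_NUMBER>:function:ForwarderA',
)

# Second subscription: only events matching "XYZ"
logs.put_subscription_filter(
    logGroupName='/aws/lambda/testfunc',
    filterName='xyz-only',
    filterPattern='XYZ',
    destinationArn='arn:aws:lambda:us-east-1:<ACCOUNT_NUMBER>:function:ForwarderB',
)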
Is it possible to provide the credential in each request in a way like
aws sns create-topic my_topic --ACCESS-KEY XXXX --SECRET-KEY XXXX
Instead of doing aws configure before I make the call.
I know that credential management can be done by using --profile like Using multiple profiles but that requires me to save the credential, which I cannot do. I'm depending on the user to provide me the key as parameter input. Is it possible?
I believe the closest option to what you are looking for would be to set the credentials as environment variables before invoking the AWS CLI.
One option is to export the environment variables that control the credentials and then call the desired CLI. The following works for me in bash:
$ export AWS_ACCESS_KEY_ID=AKIXXXXXXXXXXXXXXXX AWS_SECRET_ACCESS_KEY=YhTYxxxxxxxxxxxxxxVCSi; aws sns create-topic --name my_topic
You may also want to take a look at: Configuration Settings and Precedence
There is another way. Instead of "export"ing, just run the command like:
AWS_ACCESS_KEY_ID=AAAA AWS_SECRET_ACCESS_KEY=BBB aws ec2 describe-regions
This will ensure that the credentials are set only for the command.
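If you're driving the CLI from a script, the same idea works by setting the variables only for the child process; a minimal Python sketch (the key values are placeholders):

import os
import subprocess

# Credentials are visible only to this one child process, not the whole shell session
env = dict(os.environ,
           AWS_ACCESS_KEY_ID='AKIAXXXXXXXXXXXXXXXX',
           AWS_SECRET_ACCESS_KEY='YYYYYYYYYYYYYYYYYYYYYYYY')
subprocess.run(['aws', 'sns', 'create-topic', '--name', 'my_topic'], env=env, check=True)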
Your best bet would be to use an IAM role for the Amazon EC2 instance. That way you don't need to worry about the credentials at all. Also, the keys will be rotated periodically.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html