"aws dynamodb list-tables" not showing the tables present - amazon-web-services

When I use:
aws dynamodb list-tables
I get:
{
    "TableNames": []
}
I left the region as the default, the same one I set during aws configure. I also tried with a specific region name.
When I check the AWS console I also don't see any DynamoDB tables, yet I am able to access the table programmatically, and can add and modify items as well.
But there is no result when I use aws dynamodb list-tables, and no tables appear in the console either.

This is clearly a result of the command looking in the wrong place.
DynamoDB tables are stored in an account within a region. So, if a table definitely exists but none are showing, then the credentials being used either belong to a different AWS account or the command is being sent to the wrong region.
You can specify a region like this:
aws dynamodb list-tables --region ap-southeast-2
If you are able to access the table programmatically, then make sure the same credentials being used by your program are also being used by the AWS CLI.
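To see which identity and region the CLI is actually resolving, aws sts get-caller-identity and aws configure list are the usual checks. As a rough illustration of how region resolution works per profile, here is a sketch using a made-up ~/.aws/config; this mimics the idea, not the CLI's actual implementation:

```python
import configparser

# Hypothetical contents of ~/.aws/config -- illustration only.
SAMPLE_CONFIG = """
[default]
region = us-east-1

[profile prod]
region = ap-southeast-2
"""

def resolve_region(profile: str = "default") -> str:
    """Roughly mimic how the CLI picks a region from ~/.aws/config."""
    parser = configparser.ConfigParser()
    parser.read_string(SAMPLE_CONFIG)
    # Non-default profiles live in sections named "profile <name>"
    section = profile if profile == "default" else f"profile {profile}"
    return parser.get(section, "region")

print(resolve_region())        # region used when no --region/--profile is given
print(resolve_region("prod"))  # region used with --profile prod
```

If your program uses one profile and the CLI another, the two can easily end up pointed at different regions (or different accounts entirely).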

You need to specify the endpoint in the command for it to work. Since the DynamoDB above is accessed programmatically by a web app, it is running against a local endpoint, so this command will work:
aws dynamodb list-tables --endpoint-url http://localhost:8080 --region us-west-2

Check the region you set up in AWS configuration vs what is displayed at the top of the AWS console. I had my app configured to us-east-2 but the AWS console had us-east-1 as the default. I was able to view my table once the correct region was selected in the AWS console.

Related

Copy/Export AWS Security Group to multiple AWS accounts

I have multi-account AWS environment (set up using AWS Landing Zone) and I need to copy a specific security group to all the accounts. I do have a CFT written, but it's too much of a repetitive task to do this one by one.
The security group is in the central (shared-services) account, which has access to all the other accounts. It's better if there's a way to integrate this to Account Vending Machine (AVM) in order to avoid future tasks of exporting the SG to newly spawned accounts.
You should use CloudFormation StackSets. StackSets is a CloudFormation feature in which you have a master account, where you create/update/remove the stack set, and children accounts. In the stack set, you configure which child AWS accounts you want to deploy the CloudFormation template to, and the target regions as well.
From your comment, your master account is going to be shared-services and the rest of your accounts the children. You will need to deploy a couple of IAM roles to allow cross-account access, but after that you will be able to deploy all your templates to up to 500 AWS accounts automatically, and update them as well.
More information here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html
You can export Security Group and other configuration with CloudFormation using CloudFormer, which creates a template from the existing account configuration. Check the steps in this guide https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-cloudformer.html It will upload the template on S3 and you can reuse it or some of its parts.
Since you are using AWS Landing Zone, you can add the security group to the aws_baseline templates, either as a new template or added to one of the existing files. When submitted, AWS Landing Zone uses Step Functions and AWS Stack Sets to deploy your SG to all existing and future accounts. If you choose to create the security group in a new file, you must add a reference to it in the manifest.yaml file (compare with how the other templates are referenced).
I was able to do this via the Account Vending Machine, but StackSets should be a good alternative too.
Copying an AWS security group from one account/region to another account/region generally requires a lot of scripting with the AWS CLI or boto3.
But one thing I did that was feasible for my use case (whitelisting 14 IPs for HTTPS) was to write a bash script.
First, create a blank security group in the other AWS account (or use the AWS CLI to create that too):
```
# Source and destination settings -- fill these in before running
Region1=          # region of the source security group
SGFromCopy=       # source security group ID
profilefromcopy=  # CLI profile for the source account
Region2=          # region of the destination security group
SGToCopy=         # destination security group ID
profiletocopy=    # CLI profile for the destination account

# Copy every CIDR from the source group as a 443/tcp ingress rule on the destination
for IP in $(aws ec2 describe-security-groups --region "$Region1" --group-ids "$SGFromCopy" --profile "$profilefromcopy" --query 'SecurityGroups[].IpPermissions[].IpRanges[].CidrIp' --output text); do
    aws ec2 authorize-security-group-ingress --group-id "$SGToCopy" --ip-permissions "IpProtocol=tcp,FromPort=443,ToPort=443,IpRanges=[{CidrIp=$IP}]" --profile "$profiletocopy" --region "$Region2"
done
```
You may modify the script if you have the security group rules in CSV format, and then just iterate over them in a while loop.
BONUS
To get the desired output, redirect it to a file or wherever else you need it:
aws ec2 describe-security-groups --region $Region1 --group-ids $SGFromCopy --profile $profilefromcopy --query SecurityGroups[].IpPermissions[] --output text
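The --query SecurityGroups[].IpPermissions[].IpRanges[].CidrIp filter in the loop above flattens the nested response down to a plain list of CIDR blocks. A minimal Python sketch of the same extraction, using a made-up response payload (a real script would get this from boto3's describe_security_groups):

```python
# Hypothetical describe-security-groups response, trimmed to the fields we need.
response = {
    "SecurityGroups": [
        {
            "GroupId": "sg-aaaa1111",
            "IpPermissions": [
                {
                    "IpProtocol": "tcp",
                    "FromPort": 443,
                    "ToPort": 443,
                    "IpRanges": [
                        {"CidrIp": "203.0.113.10/32"},
                        {"CidrIp": "198.51.100.7/32"},
                    ],
                }
            ],
        }
    ]
}

def extract_cidrs(resp):
    """Equivalent of --query SecurityGroups[].IpPermissions[].IpRanges[].CidrIp."""
    return [
        ip_range["CidrIp"]
        for group in resp["SecurityGroups"]
        for perm in group["IpPermissions"]
        for ip_range in perm["IpRanges"]
    ]

print(extract_cidrs(response))  # the CIDR list the bash loop iterates over
```

Doing the filtering in Python instead of JMESPath makes it easier to add per-rule logic (for example, keeping the original FromPort/ToPort instead of hard-coding 443).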

AWS Cloudformation: Is it possible to access user id inside a Cloudformation template?

I would like to auto-tag certain AWS resources defined in a CloudFormation template with the user who uses the template to create a stack. Is it possible to access any sort of user id in the template?
I don't think this is possible.
AWS services are tied to AWS Accounts. Once IAM confirms that a particular user has permission to make an API call, resources that are generated become associated with an Account rather than a particular User. For example, it is not possible to look at an EC2 instance and determine who launched the instance.
This information is, however, available in AWS CloudTrail, but that is more of an audit log — it does not provide the user information back to the service.
So, I suspect that a stack is not provided with information about the User that launched it.
There is one way of doing it, but it isn't pretty and there is a caveat.
You can run a bash script like the one below on an EC2 instance:
# Look up this instance's ID from the instance metadata service
AWS_INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
# Identity (ARN) of whoever is running the CLI
USER_ID=`aws sts get-caller-identity --output text --query 'Arn'`
# Tag the instance; note the double quotes so ${USER_ID} actually expands
aws ec2 create-tags --resources "${AWS_INSTANCE_ID}" --tags "Key=CreatedBy,Value=${USER_ID}" --region eu-west-1
That will tag the current instance with the identity of the user that ran the CLI on that instance.
The caveat, however, is that the CLI needs to be run by the user (not a role), so your user's keys will have to be copied to the server and then removed again at the end of the script.
Not ideal, but it gives you an option.
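If you want the tag to hold just the user name rather than the full ARN, the trailing path segment of the ARN can be split off before tagging. A small sketch with a sample ARN (the account ID and user name are made up):

```python
def user_from_arn(arn: str) -> str:
    """Pull the trailing name out of an IAM ARN like arn:aws:iam::<acct>:user/<name>."""
    return arn.rsplit("/", 1)[-1]

sample_arn = "arn:aws:iam::123456789012:user/alice"  # hypothetical ARN
print(user_from_arn(sample_arn))  # -> alice
```

In the bash script above, the equivalent would be stripping everything up to the last slash from ${USER_ID} (e.g. with ${USER_ID##*/}).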

list-buckets s3api is not showing my bucket creation date?

I want to get my S3 bucket creation dates using s3api, but it is not showing the creation dates that appear in the AWS console.
When I tried with the CLI, the output was like this:
C:\Users\hero>aws s3api list-buckets
{
"Buckets": [
{
"CreationDate": "2018-09-12T11:32:04.000Z",
"Name": "campaign-app-api-prod-serverlessdeploymentbucket-"
},
{
"CreationDate": "2018-09-12T10:06:44.000Z",
"Name": "s3-api-log-events"
}
]
}
In the console, the dates shown are different.
Why am I getting different dates from s3api? Is my interpretation of CreationDate wrong?
Any help is appreciated.
Thanks
The Date Created field displayed in the web console is according to the actual creation date registered in us-east-1, while the AWS CLI and SDKs will display the creation date depending on the specified region (or the default region set in your configuration).
When using an endpoint other than us-east-1, the CreationDate you receive is actually the last modified time according to the bucket's last replication time in this region. This date can change when making changes to your bucket, such as editing its bucket policy.
So, to get the CreationDates that the S3 console shows, you need to specify the region us-east-1.
Try it like this in the AWS CLI: aws s3api list-buckets --region "us-east-1"
Check out this GitHub issue.
This Python script:
import boto3

client = boto3.client('s3')
response = client.list_buckets()
print(response['Buckets'])
returns the same dates as the AWS CLI and s3cmd. Therefore, it is not a bug in the CLI/s3cmd. Instead, it is different information coming from the Amazon S3 API call. So, I'm not sure where the console gets the 'correct' dates.
If there is a bug anywhere, it would be in the ListBuckets API call to AWS. This is best raised with AWS Support.
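Whichever region the dates come from, the CreationDate strings are ISO 8601 timestamps and can be compared or sorted once parsed. A sketch using the sample buckets from the output in the question:

```python
from datetime import datetime, timezone

# Sample buckets, copied from the list-buckets output in the question.
buckets = [
    {"CreationDate": "2018-09-12T11:32:04.000Z",
     "Name": "campaign-app-api-prod-serverlessdeploymentbucket-"},
    {"CreationDate": "2018-09-12T10:06:44.000Z",
     "Name": "s3-api-log-events"},
]

def parse_creation(bucket):
    """Parse the ISO 8601 CreationDate string into an aware datetime."""
    return datetime.strptime(
        bucket["CreationDate"], "%Y-%m-%dT%H:%M:%S.%fZ"
    ).replace(tzinfo=timezone.utc)

# Oldest bucket by creation date
oldest = min(buckets, key=parse_creation)
print(oldest["Name"])  # -> s3-api-log-events
```

Note that boto3 already returns CreationDate as a datetime object; the string parsing above is only needed when working from the CLI's JSON output.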

"Delete Backup" Option is not found in AWS Console in us-east-2 region

I created an on-demand backup for a DynamoDB table (xyz) in the us-east-2 region, but I am unable to delete the backup from the AWS console: the 'Delete backup' button does not appear.
I did not see which region you were working in, but I was able to reproduce the scenario you described. Indeed, the Delete Backup button is missing in the UI; I believe this is an AWS bug. Meanwhile, you can use the CLI to delete the backup:
aws dynamodb delete-backup --region us-east-2 --backup-arn <<Backup_ARN>>
AWS has since added the "delete backup" option for the us-east-2 region.

Can I get a user's s3 canonical ID from the command line?

I'm creating special-purpose users for Amazon S3 access, for example to give out to a third-party service. The accounts don't have an email address or password. I was hoping I'd be able to pull the canonical ID of these accounts using the aws command-line tool.
One way I have read about is to create a bucket using their account, look at the ACL for it, extract the canonical ID from that, then delete the useless bucket and move on.
But for future use, is there an easier way?
If you run:
aws iam list-users
You get a list of all of your IAM users. One of the fields is UserId, which is defined as "The stable and unique string identifying the user".
If that is what you are looking for, then you can retrieve it with:
aws iam get-user --user-name <iam user name> --query 'User.UserId'
If you're looking for the canonical ID of the account, then use s3api list-buckets:
aws s3api list-buckets --query "Owner.ID"
This assumes you have already set up credentials.
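The --query "Owner.ID" filter just pulls one field out of the list-buckets response. For reference, the same lookup in Python against a made-up response (a real script would get this from boto3's list_buckets; the ID value here is invented for illustration):

```python
# Hypothetical list-buckets response; the Owner.ID value is invented.
response = {
    "Owner": {
        "DisplayName": "example-user",
        "ID": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
    },
    "Buckets": [],
}

canonical_id = response["Owner"]["ID"]  # equivalent of --query "Owner.ID"
print(canonical_id)
```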
You should try this:
aws sts get-caller-identity