Accessing SwiftStack using Keystone - endpoint

I have been trying to use Keystone to connect to Swift.
I want to create the endpoint using this command:
keystone endpoint-create --region $REGION --service-id $SWIFT_SERVICE --publicurl "http://$SWIFT_IP/v1/KEY_\$(tenant_id)s" --adminurl "http://$SWIFT_IP/v1" --internalurl "http://$SWIFT_IP/v1/KEY_\$(tenant_id)s"
I just want to ask: what should the publicurl, adminurl, and internalurl be?

What you have there looks correct. As a rule of thumb, the publicurl is what external clients use, the internalurl is for clients on the internal network, and the adminurl is for administrative operations; for Swift, the public and internal URLs carry the per-tenant account suffix (here KEY_$(tenant_id)s) while the admin URL points at the bare /v1 root.
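For completeness, a minimal sketch of how the variables might be populated first; the region name, proxy host:port, and service description are assumptions, and the awk is the usual idiom for parsing the legacy keystone CLI's table output:
REGION=RegionOne                      # assumed region name
SWIFT_IP=192.168.1.100:8080           # assumed Swift proxy host:port
# Create the object-store service and capture its id from the table output
SWIFT_SERVICE=$(keystone service-create --name swift --type object-store \
    --description "Swift object storage" | awk '/ id / {print $4}')
followed by your endpoint-create command exactly as you wrote it.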

What is the aws-cli command for AWS Macie to create a job?

I want to create a job in AWS Macie using the AWS CLI.
I ran the following command:
aws macie2 create-classification-job --job-type "ONE_TIME" --name "maice-poc" --s3-job-definition bucketDefinitions=[{"accountID"="254378651398", "buckets"=["maice-poc"]}]
but it is giving me an error:
Unknown options: buckets=[maice-poc]}]
Can someone give me the correct command?
The --s3-job-definition parameter requires a structure as its value.
In your case, you want to pass that structure as JSON, so you should wrap the JSON, starting with bucketDefinitions, in single quotes. Also, use the JSON syntax : for key-value pairs instead of =, and note that the key is accountId, not accountID.
The following API call should work:
aws macie2 create-classification-job --job-type "ONE_TIME" --name "macie-poc" --s3-job-definition '{"bucketDefinitions":[{"accountId":"254378651398", "buckets":["maice-poc"]}]}'
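If the call succeeds it returns the new job's jobId and jobArn; assuming that, you can confirm the job's configuration and status with:
aws macie2 describe-classification-job --job-id <jobId-from-the-create-response>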

Using conditionals in AWS SES templates doesn't work (MissingRenderingAttributeException)

I am trying to use conditionals in SES templates by following this guideline:
https://docs.aws.amazon.com/ses/latest/dg/send-personalized-email-advanced.html
I should be able to create a template with dynamic content based on the result of evaluating a variable. Still, no matter what I do, I keep getting a 'MissingRenderingAttributeException' error.
For local development I use LocalStack on Docker.
aws-localstack is an alias I set up for talking to the LocalStack endpoint with the AWS CLI.
This is the test I'm running:
aws-localstack ses create-template --cli-input-json '{
  "Template": {
    "TemplateName": "test_conditionals",
    "SubjectPart": "TESTING CONDS",
    "TextPart": "{{#if lastName}}[{{lastName}}]{{/if}}",
    "HtmlPart": "{{#if lastName}}[{{lastName}}]{{/if}}"
  }
}'
aws-localstack ses test-render-template --cli-input-json '{
  "TemplateName": "test_conditionals",
  "TemplateData": "{\"lastName\":\"test-result\"}"
}'
I keep getting this error:
An error occurred (MissingRenderingAttributeException) when calling the TestRenderTemplate operation: Attribute '#if lastName' is not present in the rendering data.
How do I fix it? What am I missing?
Any suggestion would be appreciated :)
I found the problem: it has nothing to do with AWS, but with LocalStack.
Many features are missing from LocalStack, and support for conditionals in SES templates is one of them.
I thought about deleting this topic, but for the sake of everyone who expects LocalStack to behave the same as AWS in a local environment, I think it is worth having this topic on SO, because I had to work "harder" to figure it out.
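For reference, against real SES (rather than LocalStack) the equivalent render call should succeed and produce [test-result]; a sketch, assuming the same template exists in the real service:
aws ses test-render-template --template-name test_conditionals \
    --template-data '{"lastName":"test-result"}'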

Is there any way to specify --endpoint-url in aws cli config file

The aws command is
aws s3 ls --endpoint-url http://s3.amazonaws.com
Can I load the endpoint URL from a config file instead of passing it as a parameter?
This is an open bug in the AWS CLI; the issue thread links to a CLI plugin that might do what you need.
It's worth pointing out that if you're just connecting to standard Amazon cloud services (like S3) you don't need to specify --endpoint-url at all. But I assume you're trying to connect to some other private service and the URL in your example was just, well, an example...
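Note that, as far as I know, recent AWS CLI v2 releases added native support for this: assuming an up-to-date v2, you can set the endpoint in ~/.aws/config (or via the AWS_ENDPOINT_URL environment variable), e.g.:
[profile localstack]
endpoint_url = http://localhost:4566
and then run commands with that profile: aws s3 ls --profile localstack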
alias aws='aws --endpoint-url http://website'
Updated Answer
Here is an alternative alias to address the OP's specific need and comments above
alias aws='aws $([ -r "$SOME_CONFIG_FILE" ] && sed "s,^,--endpoint-url ," $SOME_CONFIG_FILE) '
The SOME_CONFIG_FILE environment variable could point to an aws-endpoint-override file containing
http://localhost:4566
Original Answer
Thought I'd share an alternative version of the alias
alias aws='aws ${AWS_ENDPOINT_OVERRIDE:+--endpoint-url $AWS_ENDPOINT_OVERRIDE} '
I adapted this idea from another alias I use for Terraform:
alias terraform='terraform ${TF_DIR:+-chdir=$TF_DIR} '
I happen to use direnv with a /Users/darren/Workspaces/current-client/.envrc containing
source_up
PATH_add bin
export AWS_PROFILE=saml
export AWS_REGION=eu-west-1
export TF_DIR=/Users/darren/Workspaces/current-client/infrastructure-project
...
A possible workflow for AWS-endpoint overriding could entail cd'ing into a docker-env directory, where /Users/darren/Workspaces/current-client/app-project/docker-env/.envrc contains
source_up
...
export AWS_ENDPOINT_OVERRIDE=http://localhost:4566
where LocalStack is running in Docker, exposed on port 4566.
You may not be using Docker or LocalStack, etc., so ultimately you will have to provide the AWS_ENDPOINT_OVERRIDE environment variable via whatever mechanism suits your use-case, with an appropriate value.
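For example, with the Original Answer's alias in place and LocalStack assumed on port 4566:
export AWS_ENDPOINT_OVERRIDE=http://localhost:4566
aws s3 ls    # runs: aws --endpoint-url http://localhost:4566 s3 ls
unset AWS_ENDPOINT_OVERRIDE
aws s3 ls    # runs plain: aws s3 ls (hits the real AWS endpoint)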

How to get ec2 instance details with price details using aws cli

How can I get EC2 instance details (like name, ID, type, region, volume, platform, on-demand/reserved) along with instance price details, using the AWS API from the CLI, and write the result to a CSV file?
Thanks in advance.
Similar to my answer here: get ec2 pricing programmatically?
You can do something like the following:
aws pricing get-products --region us-east-1 --service-code AmazonEC2 \
  --filters "Type=TERM_MATCH,Field=instanceType,Value=m5.xlarge" \
            "Type=TERM_MATCH,Field=location,Value=US East (N. Virginia)" \
  | jq -rc '.PriceList[]' \
  | jq -r '[ .product.attributes.servicecode, .product.attributes.location,
             .product.attributes.instancesku?, .product.attributes.instanceType,
             .product.attributes.usagetype, .product.attributes.operatingSystem,
             .product.attributes.memory, .product.attributes.physicalProcessor,
             .product.attributes.processorArchitecture, .product.attributes.vcpu,
             .product.attributes.currentGeneration,
             .terms.OnDemand[].priceDimensions[].unit,
             .terms.OnDemand[].priceDimensions[].pricePerUnit.USD,
             .terms.OnDemand[].priceDimensions[].description ] | @csv'
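That covers pricing. For the instance-side details in the question (ID, type, AZ, platform, Name tag), a minimal sketch with describe-instances; the field list is an assumption you can extend, and joining it with the pricing output into one CSV is left to a small script:
aws ec2 describe-instances --output text \
  --query 'Reservations[].Instances[].[InstanceId,InstanceType,Placement.AvailabilityZone,PlatformDetails,Tags[?Key==`Name`]|[0].Value]'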
I recommend using Ansible with the EC2 dynamic inventory to do this.
Ansible can gather all of this information with queries like the following.
For example, you can get the platform like this:
ansible -i ec2.py -m debug -a "var=ec2_platform" all
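Other facts follow the same pattern; these variable names come from the ec2.py inventory and may vary by version:
ansible -i ec2.py -m debug -a "var=ec2_instance_type" all
ansible -i ec2.py -m debug -a "var=ec2_region" all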
You'll have to write a YAML playbook to collect the information you need and write it to a CSV file.
I don't know of an easy way to get the exact price of EC2 servers; there are a lot of factors to take into account: the OS, the disk space, the server type, whether it is reserved or not, etc.
But I got a good approximation using what I described above.
Here is the documentation for dynamic inventory with Ansible and EC2:
http://docs.ansible.com/ansible/intro_dynamic_inventory.html
Hope it helps!
If your aim is not to automate the pricing of your servers, you can do a one-off lookup from this URL:
https://aws.amazon.com/fr/ec2/pricing/on-demand/
You'll need to know:
Server type (e.g. m3.large)
Reservation type (reserved or on-demand)
OS type (Linux, Windows, RHEL, ...)
The hourly coverage (it depends on whether you shut down your server during the night, etc.)
Then you'll have a good approximation of the price.
If you want more detail, you'll have to look at your network and data activity, and that is not so easy to calculate...
Another approach would be to go into your billing console and look at your invoices to see what you paid for the past month. But this won't work if you want to estimate the price of a new server.
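If you do want to script that look-back, here is a sketch using Cost Explorer from the CLI; the date range and the service filter value are assumptions:
aws ce get-cost-and-usage --granularity MONTHLY \
  --time-period Start=2023-01-01,End=2023-02-01 \
  --metrics UnblendedCost \
  --filter '{"Dimensions":{"Key":"SERVICE","Values":["Amazon Elastic Compute Cloud - Compute"]}}'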
Hope it helped.

InvalidSignatureException when using boto3 for DynamoDB on AWS

I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION, and then
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live creds in the env variables and setting the endpoint_url to the DynamoDB on AWS, this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid, as they are used in a different app which talks to the same DynamoDB. I've also tried not using env variables but rather passing them directly to the method, but the error persisted. Furthermore, to avoid any issues with trailing spaces, I've even used the credentials directly in the code. I'm using Python v3.4.4.
Is there maybe a header that should also be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure they contain only alphanumeric characters) but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
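As a quick sanity check that the live creds work outside boto3, you can hit the same service from the AWS CLI with the same environment variables set (region assumed):
aws dynamodb list-tables --region us-west-2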
It's a sign that your system time is off. Maybe you can check your:
1. Time zone
2. Time settings
If there are automatic settings involved, you should fix your time settings.
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that, when accessing DynamoDB from a C# environment (using the AWS .NET SDK), I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
It worked immediately after I changed those keys in the code.