Is there any way to specify --endpoint-url in the AWS CLI config file?

The aws command is:
aws s3 ls --endpoint-url http://s3.amazonaws.com
Can I load the endpoint URL from a config file instead of passing it as a parameter?

This is an open issue in the AWS CLI's tracker; that issue links to a CLI plugin which might do what you need.
It's worth pointing out that if you're just connecting to standard Amazon cloud services (like S3) you don't need to specify --endpoint-url at all. But I assume you're trying to connect to some other private service and that the URL in your example was just, well, an example...

alias aws='aws --endpoint-url http://website'

Updated Answer
Here is an alternative alias to address the OP's specific need and the comments above:
alias aws='aws $([ -r "$SOME_CONFIG_FILE" ] && sed "s,^,--endpoint-url ," $SOME_CONFIG_FILE) '
The SOME_CONFIG_FILE environment variable could point to an aws-endpoint-override file containing:
http://localhost:4566
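For illustration, a minimal setup could look like this (the file location is just an example); the alias then injects --endpoint-url in front of whatever sub-command you run:
# create an override file holding nothing but the endpoint URL
export SOME_CONFIG_FILE="$HOME/.aws/aws-endpoint-override"
echo "http://localhost:4566" > "$SOME_CONFIG_FILE"
# with the alias above, this effectively runs: aws --endpoint-url http://localhost:4566 s3 ls
aws s3 ls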
Original Answer
Thought I'd share an alternative version of the alias
alias aws='aws ${AWS_ENDPOINT_OVERRIDE:+--endpoint-url $AWS_ENDPOINT_OVERRIDE} '
I replicated this idea from another alias I use for Terraform:
alias terraform='terraform ${TF_DIR:+-chdir=$TF_DIR} '
I happen to use direnv with a /Users/darren/Workspaces/current-client/.envrc containing:
source_up
PATH_add bin
export AWS_PROFILE=saml
export AWS_REGION=eu-west-1
export TF_DIR=/Users/darren/Workspaces/current-client/infrastructure-project
...
A possible workflow for AWS-endpoint overriding could entail cd'ing into a docker-env directory, where /Users/darren/Workspaces/current-client/app-project/docker-env/.envrc contains
source_up
...
export AWS_ENDPOINT_OVERRIDE=http://localhost:4566
where LocalStack is running in Docker, exposed on port 4566.
You may not be using Docker or LocalStack, etc., so ultimately you will have to provide the AWS_ENDPOINT_OVERRIDE environment variable through whatever mechanism, and with whatever value, suits your use case.
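To make the behaviour concrete, with the direnv setup above the override only kicks in inside the docker-env directory:
aws s3 ls        # elsewhere AWS_ENDPOINT_OVERRIDE is unset, so the alias expands to plain aws
cd /Users/darren/Workspaces/current-client/app-project/docker-env
aws s3 ls        # direnv has exported AWS_ENDPOINT_OVERRIDE, so this targets http://localhost:4566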

Related

In an AWS lambda, how do I access the image_id or tag of the launched container from within it?

I have an AWS lambda built using SAM. I want to propagate the id (or, if it's easier, the tag) of a lambda's supporting docker image through to the lambda runtime function.
How do I do this?
Note: I do mean image id and NOT container id - what you'd see if you called docker image ls locally. Getting the container id / hostname is the easy bit :D
I have tried to declare a parameter in the template.yaml and have it picked up as an environment variable that way. I would prefer to define the value at most once within the template.yaml, and preferably have it auto-populated, though I am not aware of best practice there. The aim is to avoid human error. I don't want to pass the value on the command line unless I have to.
If it's too hard to get the image id then as a fallback the DockerTag would be fine. Again, I don't want this in multiple places in the template.yaml. Thanks!
Unanswered similar question: Finding the image ID of a container from within the container
The launched image URI is available in the packaged template file after running sam package, so it's possible to extract the tag from there.
For example, if using YAML:
grep -w ImageUri packaged.yaml | cut -d: -f3
This finds the URI in the packaged template (which looks like ImageUri: 12345.dkr.ecr.us-east-1.amazonaws.com/myrepo:mylambda-123abc-latest) and grabs the tag, which comes after the second :.
That said, I don't think it's a great solution; I wish there were a way to do this with the SAM CLI itself.
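In the meantime, here is a rough sketch of wiring the extracted tag back into the stack, assuming a hypothetical ImageTag parameter in template.yaml that the function maps to an environment variable:
# sketch only: my-stack and ImageTag are placeholder names, not part of the original answer
TAG=$(grep -w ImageUri packaged.yaml | cut -d: -f3)
sam deploy --template-file packaged.yaml --stack-name my-stack \
  --parameter-overrides "ImageTag=${TAG}"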

What is the aws-cli command for AWS Macie to create a job?

Actually, I want to create a job in AWS Macie using the AWS CLI.
I ran the following command:
aws macie2 create-classification-job --job-type "ONE_TIME" --name "maice-poc" --s3-job-definition bucketDefinitions=[{"accountID"="254378651398", "buckets"=["maice-poc"]}]
but it gives me the error:
Unknown options: buckets=[maice-poc]}]
Can someone give me the correct command?
The s3-job-definition option requires a structure as its value.
In your case you want to pass a JSON-formatted structure, so wrap the JSON (starting with bucketDefinitions) in single quotes, and use the JSON : syntax for key-value pairs instead of =.
The following API call should work:
aws macie2 create-classification-job --job-type "ONE_TIME" --name "macie-poc" --s3-job-definition '{"bucketDefinitions":[{"accountId":"254378651398", "buckets":["maice-poc"]}]}'
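If the shell quoting gets awkward, the standard AWS CLI JSON-input options are another route; a rough sketch (job.json is just an arbitrary file name):
# write a request skeleton, fill in jobType, name and s3JobDefinition, then submit it
aws macie2 create-classification-job --generate-cli-skeleton > job.json
aws macie2 create-classification-job --cli-input-json file://job.json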

How can I provision IIS on EC2 Windows with a resource?

I have just started working on a project that is hosted on an AWS EC2 Windows instance with IIS. I want to move this setup to a more reliable place, and one of the first things I wanted to do was to move away from snowflake servers that are set up and configured by hand.
So I started looking at Terraform from HashiCorp. My thought was that I could define the entire setup, including the network etc., in Terraform and that way make sure it was configured correctly.
I thought I would start by defining a server: a simple Windows Server instance with IIS installed. But this is where I ran into my first problems. I thought I could configure IIS from Terraform; I guess you can't. So my next thought was to combine Terraform with PowerShell Desired State Configuration (DSC).
I can set up an IIS server on a box using DSC, but I am stuck invoking DSC from Terraform. I can provision a vanilla server easily. I have tried looking for a good blog post on how to use DSC in combination with Terraform, but I can't find one that explains how to do it.
Can anyone point me towards a good place to read up on this? Or alternatively if the reason I can't find this is that it is just bad practice and I should do it in another way, then please educate me.
Thanks
How can I provision IIS on EC2 Windows with a resource?
You can run arbitrary PowerShell scripts on startup as follows:
resource "aws_instance" "windows_2016_server" {
//...
user_data = <<-EOF
<powershell>
$file = $env:SystemRoot + "\Temp\${var.some_variable}" + (Get-Date).ToString("MM-dd-yy-hh-mm")
New-Item $file -ItemType file
</powershell>
EOF
//...
}
You'll need a variable like this defined to use that (I'm providing a more complex example so there's a more useful starting point):
variable "some_variable" {
  type    = string
  default = "UserDataTestFile"
}
Instead of creating a timestamp file like the example above, you can invoke DSC to set up IIS as you normally would interactively from PowerShell on a server.
You can read more about user_data on Windows here:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-windows-user-data.html
user_data will include your PowerShell directly.
You can use templatefile("${module.path}/user-data.ps1", {some_variable = var.some_variable}) instead of an inline script as above.
Have user-data.ps1 in the same directory as the TF file that references it:
<powershell>
$file = $env:SystemRoot + "\Temp\${some_variable}" + (Get-Date).ToString("MM-dd-yy-hh-mm")
New-Item $file -ItemType file
</powershell>
You still need the <powershell></powershell> tags around your script source code. That's a requirement of how Windows on EC2 expects PowerShell user-data scripts.
And then update your TF file as follows:
resource "aws_instance" "windows_2016_server" {
//...
user_data = templatefile("${module.path}/user-data.ps1, {
some_variable = var.some_variable
})
//...
}
Note that the file read by templatefile uses variables like some_variable and NOT var.some_variable.
Read more about templatefile here:
https://www.terraform.io/docs/configuration/functions/templatefile.html

AWS CLI - A file containing items to be ignored for S3 copy or sync

Is it possible to have a file listing the files and folders to ignore when uploading items through the AWS CLI?
It has an --exclude flag, as mentioned here. However, what I'm after is something like a .gitignore or .dockerignore file rather than listing patterns with a flag.
No, the AWS Command-Line Interface (CLI) has no built-in support for an .ignore-style file.
I know it's not exactly what you are looking for, but you could set an alias in your ~/.bash_profile, something like:
alias s3_cp='aws s3 cp --exclude "yadda" --exclude "yadda" --exclude "yadda"'
This would at least reduce the need to type the patterns every time, even though they aren't kept in a separate file.
Edit: this link shows that the base config file doesn't appear to support what you are looking for: https://docs.aws.amazon.com/cli/latest/topic/s3-config.html
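If you want something closer to a .gitignore-style file, a small shell wrapper can expand each line of a file into an --exclude flag. A rough sketch, where .s3ignore is just an assumed name rather than anything the AWS CLI understands:
# expand each non-empty line of .s3ignore into an --exclude flag for aws s3 cp
s3_cp_ignore() {
  local args=()
  while IFS= read -r pattern; do
    [ -n "$pattern" ] && args+=(--exclude "$pattern")
  done < .s3ignore
  aws s3 cp "$@" "${args[@]}"
}
# usage: s3_cp_ignore ./local-dir s3://my-bucket/prefix --recursive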

Specify the AWS credentials in hadoop

I want to specify the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID at run-time.
I already tried using
hadoop -Dfs.s3a.access.key=${AWS_ACCESS_KEY_ID} -Dfs.s3a.secret.key=${AWS_SECRET_ACCESS_KEY} fs -ls s3a://my_bucket/
and
export HADOOP_CLIENT_OPTS="-Dfs.s3a.access.key=${AWS_ACCESS_KEY_ID} -Dfs.s3a.secret.key=${AWS_SECRET_ACCESS_KEY}"
and
export HADOOP_OPTS="-Dfs.s3a.access.key=${AWS_ACCESS_KEY_ID} -Dfs.s3a.secret.key=${AWS_SECRET_ACCESS_KEY}"
In the last two examples, I tried to run with:
hadoop fs -ls s3a://my-bucket/
In all the cases I got:
-ls: Fatal internal error
com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
What am I doing wrong?
This is the correct way to pass the credentials at runtime:
hadoop fs -Dfs.s3a.access.key=${AWS_ACCESS_KEY_ID} -Dfs.s3a.secret.key=${AWS_SECRET_ACCESS_KEY} -ls s3a://my_bucket/
Your syntax needs a small fix: the -D options go after fs. Also make sure that empty strings are not passed as the values of these properties; that would make the runtime properties invalid, and the client would go on searching for credentials as per the authentication chain.
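For example, a quick guard so the command fails fast instead of silently falling through the chain when a variable is empty (just a bash sketch):
# abort if either variable is unset or empty, then pass them with -D after `fs`
: "${AWS_ACCESS_KEY_ID:?is not set}"
: "${AWS_SECRET_ACCESS_KEY:?is not set}"
hadoop fs \
  -Dfs.s3a.access.key="${AWS_ACCESS_KEY_ID}" \
  -Dfs.s3a.secret.key="${AWS_SECRET_ACCESS_KEY}" \
  -ls s3a://my_bucket/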
The S3A client follows this authentication chain:
1) If login details were provided in the filesystem URI, a warning is printed and then the username and password are extracted for the AWS key and secret respectively.
2) The fs.s3a.access.key and fs.s3a.secret.key are looked for in the Hadoop XML configuration.
3) The AWS environment variables are then looked for.
4) An attempt is made to query the Amazon EC2 Instance Metadata Service to retrieve credentials published to EC2 VMs.
Other possible methods to pass the credentials at runtime (note that it is neither safe nor recommended to supply them this way):
1) Embed them in the S3 URI
hdfs dfs -ls s3a://AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY@my-bucket/
If the secret key contains any + or / symbols, escape them with %2B and %2F respectively (see the sketch after option 2 below).
Never share the URL or logs generated using it, and never use such an inline authentication mechanism in production.
2) Export environment variables for the session
export AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>
hdfs dfs -ls s3a://my-bucket/
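Regarding the escaping note under option 1, here is a rough sketch of pre-escaping the secret before building the URI (sed is just one way to do the substitution):
# replace + and / in the secret with %2B and %2F before embedding it
ESCAPED_SECRET=$(printf '%s' "$AWS_SECRET_ACCESS_KEY" | sed -e 's/+/%2B/g' -e 's,/,%2F,g')
hdfs dfs -ls "s3a://${AWS_ACCESS_KEY_ID}:${ESCAPED_SECRET}@my-bucket/"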
I think part of the problem is that, confusingly, unlike the JVM -D opts, the Hadoop -D option expects a space between the -D and the key, e.g.:
hadoop fs -ls -D fs.s3a.access.key=AAIIED s3a://landsat-pds/
I would still avoid doing that on the command line though, as anyone who can do a ps command can see your secrets.
Generally we stick them into core-site.xml when running outside EC2; in EC2 it's handled magically.