AWS-CLI - New Commands - amazon-web-services

Is it possible to extend the aws-cli with new commands? That is, does the aws-cli provide some library or mechanism for extending its functionality? For example, I want to obtain the size in MB of all the files in my bucket bucketmio. Could I create some extension for that, e.g.
aws s3 bucketmio get-size-all-files
get-size-all-files is the new extension that I want.
Thanks.

Yes, you can Create and use AWS CLI aliases, which are "shortcuts you can create in the AWS Command Line Interface (AWS CLI) to shorten commands or scripts that you frequently use. You create aliases in the alias file located in your configuration folder."
Here is an example that sends a message via Amazon SNS:
[toplevel]
textalert =
  !f() {
    aws sns publish --message "${1}" --phone-number "${2}"
  }; f
You can also write a shell script that calls the AWS CLI and adds additional logic around those calls.
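For the bucket-size example in the question, a minimal sketch of such an alias might look like this in the alias file (the alias name comes from the question; using --summarize is just one way to get the totals, and --human-readable prints sizes in whatever unit fits rather than being fixed to MB):
get-size-all-files =
  !f() {
    # List every object in the bucket and print Total Objects / Total Size at the end
    aws s3 ls "s3://${1}" --recursive --summarize --human-readable
  }; f
You could then invoke it as aws get-size-all-files bucketmio and read the Total Size line from the output.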

Storing AWS Lambda Function code directly in S3 Bucket

AWS Lambda functions have an option to use code uploaded as a file from S3. I have a successfully running Lambda function whose code is taken as a zip file from an S3 bucket. However, any time you want to update this code, you need to either manually edit the code inline within the Lambda function, or upload a new zip file to S3, go into the Lambda function, and manually re-upload the file from S3. Is there any way to get the Lambda function to link to a file in S3, so that it automatically updates its function code when you update the code file (or zip file) in S3?
Lambda doesn't actually reference the S3 code when it runs, only when it sets up the function. It effectively takes a copy of the code in your bucket and then runs the copy. So while there isn't a direct way to get the Lambda function to automatically run the latest code in your bucket, you can write a small script that updates the function code using SDK methods. I don't know which language you might want to use, but for example, you could write a script that calls the AWS CLI to update the function code. See https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-code.html
Updates a Lambda function's code.
The function's code is locked when you publish a version. You can't
modify the code of a published version, only the unpublished version.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
Synopsis
update-function-code
--function-name <value>
[--zip-file <value>]
[--s3-bucket <value>]
[--s3-key <value>]
[--s3-object-version <value>]
[--publish | --no-publish]
[--dry-run | --no-dry-run]
[--revision-id <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
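For example, a minimal invocation to point a function at a new zip in S3 might look like this (the function, bucket, and key names here are illustrative placeholders):
aws lambda update-function-code \
    --function-name my-function \
    --s3-bucket my-bucket \
    --s3-key code.zip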
You could do similar things using Python or PowerShell as well, such as using
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.update_function_code
You can also set up an AWS CodeDeploy pipeline to get your code built and deployed on each commit to your code repository (GitHub, Bitbucket, etc.).
CodeDeploy is a deployment service that automates application
deployments to Amazon EC2 instances, on-premises instances, serverless
Lambda functions, or Amazon ECS services.
Also, if you want a more unattended route for deploying your updated code to the Lambda function, use this flow in your CodePipeline:
Source -> CodeBuild (npm installs and zipping, etc.) -> S3 Upload (sourcecode.zip in an S3 bucket) -> CodeBuild (another build just for aws lambda update-function-code)
Make sure the role for the last stage has both the s3:GetObject and lambda:UpdateFunctionCode permissions attached to it.
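As an illustration, a policy statement for that role could look something like this (the resource ARNs are placeholders you would narrow down to your own bucket and function):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-artifact-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "lambda:UpdateFunctionCode",
      "Resource": "arn:aws:lambda:*:*:function:my-function"
    }
  ]
}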

Is there a simple way to clone a glue job, but change the database connections?

I have a large number of clients who supply data in the same format, and I need to load it into identical tables in different databases. I have set up a job for one of them in Glue, but now I have to do the same thing another 20 times.
Is there any way I can take an existing job and copy it, but with changes to the S3 filepath and the JDBC connection?
I haven't been able to find much online regarding scripting in AWS Glue. Would this be achievable through the AWS command line interface?
The quickest way would be to use the aws cli.
aws glue get-job --job-name <value>
where value is the specific job that you are trying to replicate. You can then alter the s3 path and JDBC connection info in the JSON that the above command returns. Also, you'll need to give it a new unique name. Once you've done that, you can pass that in to:
aws glue create-job --cli-input-json <value>
where value is the updated JSON that you are trying to create a new job from.
See the AWS command line reference for more info on the glue command line.
Use the command
aws glue create-job --generate-cli-skeleton
to generate the skeleton JSON.
Use the below command to get the existing job's definition:
aws glue get-job --job-name <value>
Copy the values from the output of the existing job's definition into the skeleton.
Remove the newline characters and pass the result as input to the below command:
aws glue create-job --cli-input-json <framed_JSON>
Here is the complete reference for Create Job AWS CLI documentation
https://docs.aws.amazon.com/cli/latest/reference/glue/create-job.html
PS: don't change the order of the elements in JSON (generated in skeleton), only update the connection and name
--cli-input-json (string) Performs service operation based on the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally.
--generate-cli-skeleton (string) Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
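Putting those steps together, here is a minimal sketch using jq to do the editing step (the job names are illustrative, and depending on the job you may also need to delete fields such as AllocatedCapacity or MaxCapacity, as the walkthrough below discovers):
# Fetch the existing job, unwrap the Job key, drop read-only fields, rename it
aws glue get-job --job-name existing_job \
  | jq '.Job | del(.CreatedOn, .LastModifiedOn) | .Name = "cloned_job"' \
  > cloned_job.json
# Edit cloned_job.json to update the S3 path and JDBC connection, then:
aws glue create-job --cli-input-json file://cloned_job.json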
Thanks to the great answers here, you already know that the AWS CLI comes to the rescue.
Tip: if you don't want to install or update the AWS CLI, just use the AWS CloudShell!
I've tested the commands here using version:
$ aws --version
aws-cli/1.19.14 Python/3.8.5 Linux/5.4.0-65-generic botocore/1.20.14
If you want to create a new job from scratch, you'll want a template first, which you can get with:
aws glue create-job --generate-cli-skeleton > job_template.json
Then use your favourite editor (I like vim) to fill out the details in job_template.json (or whatever you call it).
But if DuckDuckGo or another search engine sent you here, there's probably an existing job that you would like to clone and tweak. We'll call it "perfect_job" in this guide.
Let's get a list of all the jobs, just to check we're in the right place.
aws glue list-jobs --region us-east-1
The output shows us two jobs:
{
    "JobNames": [
        "perfect_job",
        "sunshine"
    ]
}
View our job:
aws glue get-job --job-name perfect_job --region us-east-1
The JSON output looks right, let's put it in a file so we can edit it:
aws glue get-job --job-name perfect_job --region us-east-1 > perfect_job.json
Let's cp that to a new file, say super_perfect_job.json. Now you can edit it to change the fields as desired. The first thing, of course, is to change the Name!
Two things to note:
Remove the outer level of the JSON: we need the value of Job, not the Job key itself. If you look at the job_template.json created above, you'll see that it must start with Name, so it's a small edit to match the format requirement.
There's no CreatedOn or LastModifiedOn in job_template.json either, so let's delete those lines too. Don't worry, if you forget to delete them, the creation will fail with a helpful message like 'Parameter validation failed: Unknown parameter in input: "LastModifiedOn"'.
Now we're ready to create the job! The following example will add Glue job "super_perfect_job" in the Cape Town region:
aws glue create-job --cli-input-json file://super_perfect_job.json --region af-south-1
But that didn't work:
An error occurred (InvalidInputException) when calling the CreateJob
operation: Please set only Allocated Capacity or Max Capacity.
I delete MaxCapacity and try again. Still not happy:
An error occurred (InvalidInputException) when calling the CreateJob
operation: Please do not set Allocated Capacity if using Worker Type
and Number of Workers.
Fine. I delete AllocatedCapacity and have another go. This time the output is:
{
    "Name": "super_perfect_job"
}
Which means, success! You can confirm by running list-jobs again. It's even more rewarding to open the AWS Console and see it pop up in the web UI.
We can't wait to run this job, so we'll use the CLI as well, and we'll pass three additional parameters: --fruit, --vegetable and --nut, which our script expects. But arguments beginning with -- would confuse the AWS CLI, so let's store these in a file called args.json containing:
{
  "--fruit": "tomato",
  "--vegetable": "cucumber",
  "--nut": "almond"
}
And call our job like so:
aws glue start-job-run --job-name super_perfect_job --arguments file://args.json --region af-south-1
Or like this:
aws glue start-job-run --job-name super_perfect_job --arguments '{"--fruit": "tomato","--vegetable": "cucumber"}'
And you can view the status of job runs with:
aws glue get-job-runs --job-name super_perfect_job --region af-south-1
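If you only want the state of the latest run, a --query expression can trim the output. A sketch (this assumes the most recent run is listed first, and JobRunState is the field holding the status):
aws glue get-job-runs --job-name super_perfect_job --region af-south-1 \
    --query 'JobRuns[0].JobRunState'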
As you can see, the AWS Glue API accessed through the AWS CLI is pretty powerful: not only is it convenient, it also allows automation from Continuous Integration (CI) servers like Jenkins, for example. Run aws glue help for more commands and quick help, or see the online documentation for more details.
For creating or managing permanent infrastructure, it's preferable to use Infrastructure as Code tools, such as CloudFormation or Terraform.

AWS Cloud Formation templates

Is there any way to use a simple JSON file (of my instance details) to configure a Cloud Formation template?
That's basically what a CloudFormation template provides you. Since it is a template, you can also pass in parameters as variables.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
Details on passing parameters from a config file:
https://aws.amazon.com/blogs/devops/passing-parameters-to-cloudformation-stacks-with-the-aws-cli-and-powershell/
You can have CloudFormation template parameters populated however you like. If you want to run/load the template from the AWS console, add the parameters as either defaults or options within the template and choose them while creating the stack.
If you want to load them from a properties file, you can use any programming language of your choice to do so: a bash script that loads the properties, or whatever; it's up to you and your use case. If you are using the AWS CLI to run the template, use bash or PowerShell; if you are using an AWS SDK to run your template, use the same language as your SDK; etc.
If you are using just the AWS CLI, you can do something like this with a JSON parameters file:
aws cloudformation create-stack --stack-name startmyinstance \
    --template-body file:///some/local/path/templates/startmyinstance.json \
    --parameters file:///some/local/path/params/startmyinstance-parameters.json
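For completeness, the parameters file above would follow the standard ParameterKey/ParameterValue format; something like this (the keys here are illustrative and must match the Parameters section of your template):
[
  {
    "ParameterKey": "InstanceType",
    "ParameterValue": "t3.micro"
  },
  {
    "ParameterKey": "KeyName",
    "ParameterValue": "my-keypair"
  }
]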

AWS how to use a credential just for one request

Is it possible to provide the credential in each request in a way like
aws sns create-topic my_topic --ACCESS-KEY XXXX --SECRET-KEY XXXX
Instead of doing aws configure before I make the call.
I know that credential management can be done by using --profile like Using multiple profiles but that requires me to save the credential, which I cannot do. I'm depending on the user to provide me the key as parameter input. Is it possible?
I believe the closest option to what you are looking for would be to set the credentials as environment variables before invoking the AWS CLI.
One option is to export the environment variables that control the credentials and then call the desired CLI. The following works for me in bash:
$ export AWS_ACCESS_KEY_ID=AKIXXXXXXXXXXXXXXXX AWS_SECRET_ACCESS_KEY=YhTYxxxxxxxxxxxxxxVCSi; aws sns create-topic my_topic
You may also want to take a look at: Configuration Settings and Precedence
There is another way. Instead of exporting, just run the command like this:
AWS_ACCESS_KEY_ID=AAAA AWS_SECRET_ACCESS_KEY=BBB aws ec2 describe-regions
This will ensure that the credentials are set only for the command.
Your best bet would be to use an IAM Role for your Amazon EC2 instance. That way you don't need to worry about the credentials at all. Also, the keys will be rotated periodically.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

How to mention the region for a lambda function in AWS using cli

I am trying to create a Lambda function in a particular region using the aws-cli, but I am not sure how to do it. I looked at this doc and couldn't find any parameter related to region: http://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html
Thank you.
The region is a common option to all AWS CLI commands. If you want to explicitly include the region in your command, simply include --region us-east-1, for example, to run your command in the us-east-1 region.
If this parameter is not specified explicitly, it will be implicitly derived from your configuration. This could be environment variables, your CLI's config file, or even inherited from an IAM instance profile.
A safe command to verify this is aws lambda list-functions. This is a read-only command that lists your functions; it will only list functions in the region that was implicitly supplied via your configuration. You can explicitly supply a region to this command and observe that the results change if you have functions in one region but not the other.
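To answer the original question directly, a create-function call with an explicit region might look like this (the runtime, role ARN, handler, and zip file name are illustrative placeholders):
aws lambda create-function \
    --region us-east-1 \
    --function-name my-function \
    --runtime python3.9 \
    --role arn:aws:iam::123456789012:role/my-lambda-role \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://function.zip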
Further Reading
AWS Documentation - Configuring the AWS Command Line Interface
AWS Documentation - Configuration and Credential Files
AWS Documentation - AWS CLI Options