I am looking for a way to trigger a rebuild of the Elastic Beanstalk back-end every night. Right now I am doing it manually through the AWS console.
How can a Lambda (?) be set up to do the same automatically?
You would create a Lambda function, in whatever language you choose, that uses the AWS SDK for that language. The Lambda function would call the Elastic Beanstalk API to trigger an environment rebuild.
For example, if you wrote the Lambda function in Python, you would use the AWS SDK for Python (aka Boto3) and call the rebuild_environment() method on the ElasticBeanstalk client.
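A minimal handler sketch, assuming your environment is named my-env (replace it with your actual environment name):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

def handler(event, context):
    # Triggers a full rebuild of the environment, the same action as
    # "Rebuild environment" in the AWS console.
    eb.rebuild_environment(EnvironmentName="my-env")  # placeholder name
    return {"status": "rebuild started"}
```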
You would create an IAM role for the Lambda function, and assign the appropriate permissions to that IAM role to allow it to rebuild your Elastic Beanstalk environment.
Finally, you would schedule the Lambda function to run every night, e.g. with an Amazon EventBridge (CloudWatch Events) rule that uses a cron expression.
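As a rough sketch, the schedule can also be created with Boto3 (the rule name, function name, Lambda ARN, and time of day below are placeholders; you can just as easily do this in the console):

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:rebuild-eb"  # placeholder

# Run every day at 03:00 UTC.
rule = events.put_rule(
    Name="nightly-eb-rebuild",
    ScheduleExpression="cron(0 3 * * ? *)",
)

# Point the rule at the Lambda function...
events.put_targets(
    Rule="nightly-eb-rebuild",
    Targets=[{"Id": "rebuild-target", "Arn": LAMBDA_ARN}],
)

# ...and allow EventBridge to invoke it.
lambda_client.add_permission(
    FunctionName="rebuild-eb",
    StatementId="allow-eventbridge-nightly",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
```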
Here's what I've done:
I set up an Aurora Serverless MySQL instance.
Created a security group for Cloud9 which allows me to access the Aurora Serverless MySQL instance.
Created a Lambda function which queries my DB. I added the custom VPC and the security group which lets me access Aurora Serverless (the same one as Cloud9). The Lambda function works fine and can query my DB.
Before setting up Aurora Serverless, I had an RDS MySQL instance which my Lambda functions could query, and a little deploy script on my local machine which I ran to package a Lambda function's changes and upload them to the respective Lambda function. To set up the CLI I just used this guide: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html.
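The deploy script is essentially just a zip-and-upload, roughly like this (simplified; the function and folder names are placeholders):

```python
import io
import os
import zipfile
import boto3

def deploy(function_name, source_dir):
    # Zip the function's source folder in memory...
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(source_dir):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, source_dir))

    # ...and push the new code to the existing Lambda function.
    boto3.client("lambda").update_function_code(
        FunctionName=function_name,
        ZipFile=buf.getvalue(),
    )

# Placeholder names for illustration.
deploy("my-aurora-query-function", "functions/aurora_query")
```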
Now that I have the security group on my Lambda function, the AWS CLI doesn't let me upload the Lambda function; it gets stuck during the upload process. Note: I can still upload Lambda functions using the Lambda GUI in the AWS console.
Does anyone know what I can do to upload my Lambda functions using CLI again?
Here's a picture of where it gets stuck
I have created a multi-service Spring/Python project. What's the easiest way to deploy it on the AWS cloud with 4 machines?
You can use several AWS services to achieve this:
Elastic Beanstalk: Upload your code to Elastic Beanstalk; for any newer version, just upload it again and choose a deployment method, and it will automatically be deployed to the machines. You can choose whatever number of instances you want to spin up, along with a load balancer and more.
Documentation here
CodePipeline: Push your code to CodeCommit, GitHub, or S3, and let the pipeline use CodeCommit, CodeBuild, and CodeDeploy to deploy it to your EC2 servers.
Documentation here
CloudFormation: You can use this service to spin up your infrastructure purely through code; this is called Infrastructure as Code. Write a template and spin up the instances (a small sketch follows after this list).
Documentation here
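As a rough illustration of the CloudFormation option, here is a minimal sketch that creates a stack from an inline template with Boto3; the stack name, AMI ID, and instance type are placeholders, and a real multi-service template would define far more resources:

```python
import boto3

# A deliberately tiny template: one EC2 instance.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678        # placeholder AMI ID
      InstanceType: t3.micro
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="my-multiservice-stack", TemplateBody=TEMPLATE)

# Wait until the stack is fully created before using it.
cfn.get_waiter("stack_create_complete").wait(StackName="my-multiservice-stack")
```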
I have a use case in the Amazon cloud: I'm using a Fargate cluster and CloudFormation.
I want to do continuous deployment, i.e. on a new-image-upload trigger I want to update the CloudFormation stack with the new image, and also run this automated deployment on a manual trigger when the client wants.
What should I use for continuous deployment, AWS CodeDeploy or AWS Lambda?
AWS CodeDeploy has a CloudFormation provider with limited options and less control, I believe.
AWS Lambda has great control over the CloudFormation client through its boto3 API.
I also read somewhere that when you hit limitations in CodeDeploy or CodePipeline, you can integrate Lambda to get around them. So why not use Lambda in the first place for continuous deployment only?
I'm fairly convinced about AWS Lambda over AWS CodeDeploy after doing some research; however, I'm open to comments and suggestions.
You can use both of them to achieve a complete CI/CD implementation.
When an image gets uploaded, the Lambda will be triggered, and the Lambda will hold your configuration and parameters.
Using those, it can call CodeDeploy so that your ECR image gets deployed to your Fargate cluster.
You can also achieve your second requirement, a manual trigger when the client wants one, with this implementation.
In Lambda you can trigger it manually, passing parameters at runtime (see the sketch below).
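For illustration only, a Lambda along these lines could react to a new image and update the CloudFormation stack; the stack name, the "ContainerImage" parameter key, and the event shape are assumptions, not part of the original setup:

```python
import boto3

cfn = boto3.client("cloudformation")

def handler(event, context):
    # The image URI can come from an ECR push event or from a manual
    # invocation payload such as {"image_uri": "..."}.
    image_uri = event["image_uri"]

    # Re-use the existing template and only swap the image parameter.
    cfn.update_stack(
        StackName="my-fargate-stack",
        UsePreviousTemplate=True,
        Parameters=[
            {"ParameterKey": "ContainerImage", "ParameterValue": image_uri},
        ],
        Capabilities=["CAPABILITY_IAM"],
    )
    return {"status": "update started", "image": image_uri}
```

For the manual trigger, the client can invoke the same function directly and pass the desired image URI in the payload.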
I hope this helps you
I am trying to deploy Lambda functions using AWS Cloud9. When I press deploy, all of my functions are deployed/synced at the same time rather than just the one I selected when deploying. The same thing happens when right-clicking on the function and pressing deploy. I find this quite annoying and am wondering if there is any workaround?
When you click deploy, Cloud9 runs aws cloudformation package and aws cloudformation deploy on your template.yaml file in the background. (Source: I developed the Lambda integration for AWS Cloud9.)
Because all your functions are bundled into one serverless application and there is only one CloudFormation stack, they can only be deployed all at once with CloudFormation.
If you're only making a code change for one function and are not modifying any configuration settings, you can update that one function from the command line with:
zip -r function.zip . && aws lambda update-function-code --function-name <function-name> --zip-file fileb://function.zip
Run this in the same folder as your template.yaml file, replacing <function-name> with its full generated name, e.g. cloud9-myapp-myfunction-ABCD1234 (you can see the full name under the remote functions list in the AWS Resources panel).
In AWS Cloud9, Lambda functions are created within serverless applications and are therefore deployed via CloudFormation. With CloudFormation, the whole stack is deployed at once, so all functions are deployed together (see this discussion for more info).
Currently, I'm creating an AWS AMI manually from an EC2 instance, and I would like to automate the process using a Jenkins build.
I've configured the jenkins-cloudformation plugin with the credentials and tried to trigger the CloudFormation template to launch the EC2 instance. From here, how can I proceed with the automation to create the AMI within the CloudFormation template?
Can someone help me with this?
This is an old question, but here is some info for anyone trying to do such automation. You might use HashiCorp Packer for creating the image but, if you know your way around Lambdas and the AWS API, you do not need Packer.
You can create a new AMI by launching an instance from a source AMI, customizing it the way you want, and then calling the AWS API to make an AMI out of the instance. Here are the steps you might follow:
First, you need to find a source image. You can specify filters to the EC2 describe_images call to do this.
Once you have the image, you need to launch an instance from it. Here is the boto3 API to make the call.
While launching the instance, you will want to pass 'UserData' to it. Your user data may be a few simple lines that install packages, or it may do more advanced things. You can put it all into a script, host it in S3, and make UserData download and execute your script.
Once you are done with your work on the instance, it is time to capture it as a new AMI.
So, how would you run these steps, and where is the glue?
You can use AWS Lambda to manage these steps. One Lambda can find the source AMI and launch an instance from it (a rough sketch of this follows below). Another Lambda can capture the image.
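A minimal sketch of the first Lambda, assuming an Amazon Linux 2 source image, a t3.micro instance, and a trivial UserData script (all illustrative placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

USER_DATA = """#!/bin/bash
yum install -y nginx            # whatever customization you need
"""

def handler(event, context):
    # Find the most recent matching source image.
    images = ec2.describe_images(
        Owners=["amazon"],
        Filters=[{"Name": "name", "Values": ["amzn2-ami-hvm-*-x86_64-gp2"]}],
    )["Images"]
    source_ami = max(images, key=lambda i: i["CreationDate"])["ImageId"]

    # Launch an instance from it, passing the customization script as UserData.
    instance = ec2.run_instances(
        ImageId=source_ami,
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,
    )["Instances"][0]
    return {"instance_id": instance["InstanceId"]}
```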
Once your instance is customized, you would trigger the Lambda that captures it as an AMI (sketched below). You might do that by invoking the Lambda directly. Depending on the reusability requirements you have, you might want to trigger that Lambda from SNS or CloudWatch; in that case you would send an SNS message to your SNS topic or enable/trigger your CloudWatch rule.
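And a minimal sketch of the capture Lambda; it assumes the instance ID arrives directly in the event payload, which depends on how you wire the trigger:

```python
from datetime import datetime, timezone
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # The instance to capture, e.g. passed in the invocation payload.
    instance_id = event["instance_id"]

    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    image = ec2.create_image(
        InstanceId=instance_id,
        Name=f"custom-ami-{stamp}",
        Description="AMI captured from customized instance",
    )

    # Optionally wait until the AMI is available, then clean up the instance.
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
    ec2.terminate_instances(InstanceIds=[instance_id])
    return {"image_id": image["ImageId"]}
```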
Your CloudFormation template would install these Lambdas and any other components that trigger them (SNS and CloudWatch).