How to create an AWS Elastic Beanstalk environment using the Java SDK? - amazon-web-services

Can anyone help me with, or provide any sources for, creating an AWS Elastic Beanstalk environment using a Java program and deploying our application to it?
Thank you in advance.

You can download the AWS Java SDK here. It is also available in the Maven repository:
Maven:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.9.7</version>
</dependency>
Gradle:
'com.amazonaws:aws-java-sdk:1.9.7'
Now, on to using the SDK. You might want to read up on getting started with the AWS SDK.
Here is some very watered-down code to get you started:
import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient;
import com.amazonaws.services.elasticbeanstalk.model.*;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.File;

public class AwsTest {
    public static void main(String[] args) {
        AWSElasticBeanstalkClient eb = new AWSElasticBeanstalkClient();

        // Create Application
        CreateApplicationRequest request = new CreateApplicationRequest("myAppName");
        eb.createApplication(request);

        // Create Environment
        CreateEnvironmentRequest envRequest = new CreateEnvironmentRequest("myAppName", "env-name");
        envRequest.setSolutionStackName("64bit Amazon Linux 2014.09 v1.0.9 running Tomcat 7 Java 7");
        envRequest.setVersionLabel("application Version");
        eb.createEnvironment(envRequest);

        // Deploy code
        CreateStorageLocationResult location = eb.createStorageLocation();
        String bucket = location.getS3Bucket();
        File file = new File("myapp.zip");
        PutObjectRequest object = new PutObjectRequest(bucket, "myapp.zip", file);
        new AmazonS3Client().putObject(object);

        CreateApplicationVersionRequest versionRequest = new CreateApplicationVersionRequest();
        versionRequest.setVersionLabel("myversion");
        versionRequest.setApplicationName("myAppName");
        S3Location s3 = new S3Location(bucket, "myapp.zip");
        versionRequest.setSourceBundle(s3);

        UpdateEnvironmentRequest updateRequest = new UpdateEnvironmentRequest();
        updateRequest.setEnvironmentName("env-name"); // target the environment created above
        updateRequest.setVersionLabel("myversion");
        eb.updateEnvironment(updateRequest);
    }
}

There is a small piece of code missing in the code given above, under this section:
CreateApplicationVersionRequest versionRequest = new CreateApplicationVersionRequest();
versionRequest.setVersionLabel("myversion");
versionRequest.setApplicationName("myAppName");
S3Location s3 = new S3Location(bucket, "myapp.zip");
versionRequest.setSourceBundle(s3);
You need to add eb.createApplicationVersion(versionRequest); in order to create a new version with your own source files. Only then can you deploy the new version to the running environment.
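Putting it together, a minimal sketch of the corrected deployment steps (reusing the bucket, application name, and environment name from the code above) looks like this:
// Register the new application version before deploying it
CreateApplicationVersionRequest versionRequest = new CreateApplicationVersionRequest();
versionRequest.setVersionLabel("myversion");
versionRequest.setApplicationName("myAppName");
versionRequest.setSourceBundle(new S3Location(bucket, "myapp.zip"));
eb.createApplicationVersion(versionRequest);
// Point the running environment at the new version
UpdateEnvironmentRequest updateRequest = new UpdateEnvironmentRequest();
updateRequest.setEnvironmentName("env-name");
updateRequest.setVersionLabel("myversion");
eb.updateEnvironment(updateRequest);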

A convenient method for deploying an AWS Elastic Beanstalk environment is to use the AWS Toolkit for Eclipse.
It allows you to write and test your code locally, then create an Elastic Beanstalk environment and deploy your code to the environment.
The Elastic Beanstalk management console can also be used to deploy a Java environment with a Sample Application, which you can then override with your own code.
See also:
Deploying an Application Using AWS Elastic Beanstalk
AWS Elastic Beanstalk Documentation

Here is updated code using the AWS SDK for Java V2 to create an environment for Elastic Beanstalk:
Region region = Region.US_WEST_2;
ElasticBeanstalkClient beanstalkClient = ElasticBeanstalkClient.builder()
        .region(region)
        .build();

ConfigurationOptionSetting setting1 = ConfigurationOptionSetting.builder()
        .namespace("aws:autoscaling:launchconfiguration")
        .optionName("IamInstanceProfile")
        .value("aws-elasticbeanstalk-ec2-role")
        .build();

CreateEnvironmentRequest applicationRequest = CreateEnvironmentRequest.builder()
        .description("An AWS Elastic Beanstalk environment created using the AWS Java API")
        .environmentName("MyEnviron8")
        .solutionStackName("64bit Amazon Linux 2 v3.2.12 running Corretto 11")
        .applicationName("TestApp")
        .cnamePrefix("CNAMEPrefix")
        .optionSettings(setting1)
        .build();

CreateEnvironmentResponse response = beanstalkClient.createEnvironment(applicationRequest);
To learn how to get up and running with the AWS SDK for Java V2, see https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html.
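The original question also asks about deploying an application, so here is a hedged sketch of the corresponding V2 calls for registering an application version and pointing the environment at it. The bucket name, key, and version label are placeholders; the application and environment names are taken from the example above:
// Register a new application version from a bundle already uploaded to S3
CreateApplicationVersionRequest versionRequest = CreateApplicationVersionRequest.builder()
        .applicationName("TestApp")
        .versionLabel("v1")                   // placeholder version label
        .sourceBundle(S3Location.builder()
                .s3Bucket("my-bucket")        // placeholder bucket holding the bundle
                .s3Key("myapp.zip")           // placeholder key
                .build())
        .build();
beanstalkClient.createApplicationVersion(versionRequest);

// Point the running environment at the new version
UpdateEnvironmentRequest updateRequest = UpdateEnvironmentRequest.builder()
        .environmentName("MyEnviron8")
        .versionLabel("v1")
        .build();
beanstalkClient.updateEnvironment(updateRequest);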

Related

How to run AWS CLI command from .NET Core

Is it possible to run an AWS CLI command from a .NET Core app? I need to automatically sync the contents of a folder with S3. I use the AWS SDK for other setup, but the AWS SDK does not contain an s3 sync method.
I tried creating a .NET Core console app and a .bat file with the following (for the test, it only checks the AWS CLI version):
aws --version
PAUSE
And started it from .NET:
string pathToRun = @"C:\Users\Adam\source\repos\StaticWeb\StaticWeb\run.bat";
Process p = new Process();
p.StartInfo.FileName = pathToRun;
// Run the process and wait for it to complete
p.Start();
p.WaitForExit();
Error
aws --version
'aws' is not recognized as an internal or external command,
operable program or batch file.
If I run run.bat manually, it works properly.
I have installed both the 32-bit and 64-bit AWS CLI on my computer.
I've found the solution: I replaced the aws keyword with the full path to the AWS CLI executable, so I don't need the .bat file anymore.
string command = $"/C start \"\" \"C:/Program Files/Amazon/AWSCLIV2/aws.exe\" --version";
ProcessStartInfo info = new ProcessStartInfo();
info.FileName = "cmd.exe";
info.Arguments = command;
info.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden;
var process = new Process();
process.StartInfo = info;
process.Start();

Launch and install applications on EC2 instance using aws sdk

I don't know if this will be possible in the first place.
The requirement is to launch an EC2 instance using the AWS SDK (I know this is possible) based on some application logic.
Then I want to install some application on the newly launched instance, let's say Docker.
Is this possible using the SDK? Or is my idea itself wrong, and is there a better solution to this scenario?
Can I run a command on a running instance using the SDK?
Yes, you can install anything on an EC2 instance when it is launched by providing a script/commands in the user data section. This is also possible from the AWS SDK: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_UserData.html
You can pass a command like yum install docker in the user data, for example:
UserData='yum install docker'
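If you are using the AWS SDK for Java (v1), a minimal sketch of launching an instance with such a user data script might look like this; the AMI ID and instance type are placeholders, and note that the user data must be Base64-encoded:
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.util.Base64;

AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
// Shell script executed by cloud-init on first boot
String userData = "#!/bin/bash\nyum install -y docker\nservice docker start";
RunInstancesRequest runRequest = new RunInstancesRequest()
        .withImageId("ami-xxxxxxxx")   // placeholder AMI ID
        .withInstanceType("t2.micro")  // placeholder instance type
        .withMinCount(1)
        .withMaxCount(1)
        .withUserData(Base64.encodeAsString(userData.getBytes()));
ec2.runInstances(runRequest);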
Installing applications using commands on a running instance is possible using boto3 and SSM:
import boto3

ssm_client = boto3.client('ssm')
response = ssm_client.send_command(
    InstanceIds=['i-xxxxxxx'],
    DocumentName="AWS-RunShellScript",
    Parameters={'commands': ['echo "abc" > 1.txt']},
)
command_id = response['Command']['CommandId']
output = ssm_client.get_command_invocation(
    CommandId=command_id,
    InstanceId='i-xxxxxxxx',
)
print(output)
The SSM client runs commands on one or more managed instances.
You can also wrap this in a function:
def execute_commands(ssm_client, commands, instance_ids):
    # AWS-RunShellScript is a preconfigured SSM document
    response = ssm_client.send_command(
        DocumentName="AWS-RunShellScript",
        Parameters={'commands': commands},
        InstanceIds=instance_ids,
    )
    return response

ssm_client = boto3.client('ssm')
commands = ['ifconfig']
instance_ids = ['i-xxxxxxxx']
execute_commands(ssm_client, commands, instance_ids)
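If you would rather do the same thing from the AWS SDK for Java (v1) instead of boto3, a hedged sketch using the SSM client might look like this (the instance ID and command are placeholders):
import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagement;
import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagementClientBuilder;
import com.amazonaws.services.simplesystemsmanagement.model.SendCommandRequest;
import com.amazonaws.services.simplesystemsmanagement.model.SendCommandResult;
import java.util.Collections;

AWSSimpleSystemsManagement ssm = AWSSimpleSystemsManagementClientBuilder.defaultClient();
// AWS-RunShellScript is a preconfigured SSM document for running shell commands
SendCommandRequest request = new SendCommandRequest()
        .withInstanceIds("i-xxxxxxxx")   // placeholder instance ID
        .withDocumentName("AWS-RunShellScript")
        .withParameters(Collections.singletonMap("commands",
                Collections.singletonList("yum install -y docker")));
SendCommandResult result = ssm.sendCommand(request);
String commandId = result.getCommand().getCommandId();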

Using Gradle plugin to push docker images to ECR

I am using gradle-docker-plugin to build and push Docker images to Amazon's ECR. To do this I am also using a remote Docker daemon running on an EC2 instance. I have configured a custom task EcrLoginTask to fetch the ECR authorization token using the aws-java-sdk-ecr library. The relevant code looks like:
class EcrLoginTask extends DefaultTask {

    String accessKey
    String secretCode
    String region
    String registryId

    @TaskAction
    String getPassword() {
        AmazonECR ecrClient = AmazonECRClient.builder()
                .withRegion(Regions.fromName(region))
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretCode)))
                .build()
        GetAuthorizationTokenResult authorizationToken = ecrClient.getAuthorizationToken(
                new GetAuthorizationTokenRequest().withRegistryIds(registryId))
        String token = authorizationToken.getAuthorizationData().get(0).getAuthorizationToken()
        System.setProperty("DOCKER_PASS", token) // Will this work ?
        return token
    }
}
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.amazonaws:aws-java-sdk-ecr:1.11.244'
        classpath 'com.bmuschko:gradle-docker-plugin:3.2.1'
    }
}

docker {
    url = "tcp://remote-docker-host:2375"
    registryCredentials {
        username = 'AWS'
        password = System.getProperty("DOCKER_PASS") // Need to provide at runtime !!!
        url = 'https://123456789123.dkr.ecr.eu-west-1.amazonaws.com'
    }
}

task getECRPassword(type: EcrLoginTask) {
    accessKey AWS_KEY
    secretCode AWS_SECRET
    region AWS_REGION
    registryId '139539380579'
}

task dbuild(type: DockerBuildImage) {
    dependsOn build
    inputDir = file(".")
    tag "139539380579.dkr.ecr.eu-west-1.amazonaws.com/n6duplicator"
}

task dpush(type: DockerPushImage) {
    dependsOn dbuild, getECRPassword
    imageName "123456789123.dkr.ecr.eu-west-1.amazonaws.com/n6duplicator"
}
The remote docker connection works fine, ECR token is also fetched successfully and the dbuild task also gets executed successfully.
PROBLEM
The dpush task fails - "Could not push image: no basic auth credentials"
I believe this is because the authorization token received using the EcrLoginTask was not passed on to the password property in the docker configuration closure.
How do I fix it? I need to provide the credentials on the fly each time the build is executed.
Have a look at the 'gradle-aws-ecr-plugin'. It's able to get a fresh (latest) Amazon ECR docker registry token, during every AWS/Docker command call:
All Docker tasks such as DockerPullImage, DockerPushImage, etc. that are configured with the ECR registry URL will get a temporary ECR token. No further configuration is necessary. It is possible to set the registry URL for individual tasks.
This should work well alongside either the gradle-docker-plugin or Netflix's nebula-docker-plugin, which is also based on, and extends, the 'bmuschko' docker plugin.
The 'gradle-aws-ecr-plugin' BitBucket homepage explains concisely how to configure both the AWS and ECR [URL] credentials.
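If you do wire the token in yourself (as in the EcrLoginTask above), note that the authorizationToken returned by ECR is a Base64-encoded string of the form AWS:<password>, so it has to be decoded and split before it can be used as the Docker registry password. A rough sketch of that decoding in Java/Groovy:
String token = authorizationToken.getAuthorizationData().get(0).getAuthorizationToken();
// The decoded token has the form "AWS:<password>"
String decoded = new String(java.util.Base64.getDecoder().decode(token), java.nio.charset.StandardCharsets.UTF_8);
String dockerPassword = decoded.split(":", 2)[1];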

Deploy new war to AWS Elastic Beanstalk environment through JAVA API

I created an Elastic Beanstalk application and an environment for that application through the AWS Java API. Now the environment is running with the sample application. I just need to deploy a new WAR file to the environment through the Java API. How do I do it? Here is my code for creating the environment:
CreateEnvironmentRequest createEnvironmentRequest = new CreateEnvironmentRequest()
        .withApplicationName("MySampleApplicationThree")
        .withEnvironmentName("MySampleApplicationThree-env")
        .withCNAMEPrefix("MySampleApplicationThree")
        .withSolutionStackName("64bit Amazon Linux 2017.03 v2.6.1 running Tomcat 8 Java 8")
        .withVersionLabel("Sample Application");
CreateEnvironmentResult envresult = service.beansTalk().createEnvironment(createEnvironmentRequest);
First, upload the WAR file to an S3 bucket, then create a new Beanstalk application version that refers to the S3 location of the new WAR file. After that, deploy the new application version to the environment. You can do this using the Java SDK as below:
// Create Environment
CreateEnvironmentRequest envRequest = new CreateEnvironmentRequest("SampleApplication", "SampleApplication-env2");
envRequest.setSolutionStackName("64bit Amazon Linux 2017.03 v2.6.1 running Tomcat 8 Java 8");
envRequest.setVersionLabel("SampleApplication");
service.beansTalk().createEnvironment(envRequest);
// Deploy code
//Create S3 storage location and upload new file into it
CreateStorageLocationResult location = service.beansTalk().createStorageLocation();
String bucket = location.getS3Bucket();
File file = new File("FirstServlet.war");
PutObjectRequest object = new PutObjectRequest(bucket, "FirstServlet.war", file);
PutObjectResult res = service.s3().putObject(object);
CreateApplicationVersionRequest versionRequest = new CreateApplicationVersionRequest();
versionRequest.setVersionLabel("First Servlet");
versionRequest.setApplicationName("SampleApplication");
S3Location s3 = new S3Location(bucket, "FirstServlet.war");
versionRequest.setSourceBundle(s3);
CreateApplicationVersionResult resu = service.beansTalk().createApplicationVersion(versionRequest);
UpdateEnvironmentRequest updateRequest = new UpdateEnvironmentRequest();
updateRequest.setEnvironmentId("xxx");
updateRequest.setVersionLabel("First Servlet");
UpdateEnvironmentResult result = service.beansTalk().updateEnvironment(updateRequest);

Best way to deploy play2 app using Amazon Beanstalk

I found fragmented instructions here and in some other places about deploying a Play2 app on Amazon EC2, but did not find any neat way to deploy using Beanstalk.
Play is a nice framework and AWS Beanstalk is one of the most popular services, so why is there no official instruction for doing this?
Has anyone found any better solution?
Deploying a Play2 app on elastic beanstalk is now easy with Docker Containers in combination with sbt's experimental docker feature.
In build.sbt specify the exposed docker ports:
dockerExposedPorts in Docker := Seq(9000)
You should automate the following steps, but you can try this out manually to test that it works:
Generate a Dockerfile for the project by running the command: sbt docker:stage.
Go to the ./target/docker/ directory.
Create an elastic beanstalk Dockerrun.aws.json file with the following contents:
{
"AWSEBDockerrunVersion": "1",
"Ports": [
{
"ContainerPort": "9000"
}
]
}
Zip up everything in that directory, let's say into a file called play2-test-docker.zip. The zip file should contain the files: Dockerfile, Dockerrun.aws.json, and files/* directory.
Go to the AWS Beanstalk console and create a new application using m3.medium or any instance type with enough memory for the JVM to run. Any instance with too little memory will result in a JVM error.
Select "Docker Container" in the Predefined Configuration dropdown.
In the application selection screen, select "Upload" and select the zip file you created earlier. Launch the app and then go brew some tea. This can take a very long time. Minutes. Subsequent deployments of the same app version should be slightly quicker.
Once the app is running and green in the aws console, click on the app's url and you should see the welcome screen of the application (or whatever your index file is).
Here's my solution that doesn't require any additional services/containers like Docker or Jenkins.
Create a dist folder in the root of your Play application's directory. Create a Procfile file containing the following contents and put it in the dist folder (EB requires port 5000):
web: ./bin/YOUR_APP_FILE_NAME -Dhttp.port=5000 -Dconfig.file=conf/application.conf
The YOUR_APP_FILE_NAME is the name of the executable in the bin directory, which is inside the .zip created by activator dist.
After running activator dist, you can just upload the created zip file to Elastic Beanstalk and it will automatically deploy the app. You can also put whatever .ebextensions folders and configuration files you require for Elastic Beanstalk configuration into the dist folder. For example, I have dist/.ebextensions/nginx/conf.d/proxy.conf for Nginx reverse proxy settings and dist/.ebextensions/env.config for environment variables.
Edit 2016: There's now a much better way to deploy your Playframework apps onto ElasticBeanstalk using the new Java SE containers.
Here's an article that walks you through deploying step by step using Jenkins to build and deploy your project:
https://www.davemaple.com/articles/deploy-playframework-elastic-beanstalk-jenkins/
You can use custom AMIs that I keep updated here:
https://github.com/davemaple/playframework-nginx-elastic-beanstalk
These run Nginx + Playframework and support standard zip files created using "activator dist".
We also saw this as being too much of a pain and have added native Play 2 support to Boxfuse to address this.
You can now simply do boxfuse run my-play-app-1.0.zip -env=prod and this will automatically:
create a minimal AMI tailor-made for your Play 2 app
create an elastic IP
create a security group with the correct permissions
launch an instance of your app
All future updates are performed as blue/green deployments with zero downtime.
This also works with Elastic Load Balancers and Auto-Scaling Groups and the Boxfuse free tier is designed to fit the AWS free tier.
You can read more about it here: https://boxfuse.com/blog/playframework-aws
Disclaimer: I'm the founder and CEO of Boxfuse
I had some problems with other solutions found here and there. I guess that the problem is that I'm developing on Play 2.4.
Anyway, I could deploy the app to Beanstalk using Typesafe Activator and Docker:
In build.sbt I added these lines:
import com.typesafe.sbt.packager.docker.{ExecCmd, Cmd}
// [...]
dockerCommands := Seq(
Cmd("FROM","java:openjdk-8-jre"),
Cmd("MAINTAINER","myname"),
Cmd("EXPOSE","9000"),
Cmd("ADD","stage /"),
Cmd("WORKDIR","/opt/docker"),
Cmd("RUN","[\"chown\", \"-R\", \"daemon\", \".\"]"),
Cmd("RUN","[\"chmod\", \"+x\", \"bin/myapp\"]"),
Cmd("USER","daemon"),
Cmd("ENTRYPOINT","[\"bin/myapp\", \"-J-Xms128m\", \"-J-Xmx512m\", \"-J-server\"]"),
ExecCmd("CMD")
)
I went to the project's directory and ran this command in the terminal:
$ ./activator clean docker:stage
I opened the [project]/target/docker directory and created the file Dockerrun.aws.json. This was its content:
{
"AWSEBDockerrunVersion": "1",
"Ports": [
{
"ContainerPort": "9000"
}
]
}
In the same target/docker directory, I tested the result, built, checked and ran the image:
$ docker build -t myapp .
$ docker images
$ docker run -p 9000:9000 myapp
As everything was ok, I zipped the content:
$ zip -r myapp.zip *
My zip file had the Dockerfile, Dockerrun.aws.json, and the stage/* files.
Finally, I created a new Beanstalk app and uploaded the zip created in the last step. I took care to select "Generic Docker" under "Predefined configuration" when creating the app.
Beanstalk only supports WAR deployment and Play doesn't officially support WAR deployment. If you want to use EC2 then you should instead just create an EC2 instance and follow the deployment instructions: http://www.playframework.com/documentation/2.2.x/ProductionDist
Deploying Play 2.* apps on AWS EC2 is quite different until you find a much better way to do it. Ansible looks like a promising solution for this; it still requires setting up Ansible and its playbooks, but that effort should be worth it.
I found these reads only recently and have yet to apply them in my project. I hope the following will help you learn more:
Ansible + Play + AWS EC2
Read it to learn more about using Ansible to deploy Play on AWS.
Thanks!
Hope this helps you get started. Please do share any knowledge you gain during the process, or any simpler way to solve this complicated deployment problem.