How to run AWS CLI command from .NET Core - amazon-web-services

Is it possible to run an AWS CLI command from a .NET Core app? I need to automatically sync the contents of a folder with S3. I use the AWS SDK for other tasks, but the SDK does not provide an s3 sync method.
As a test, I created a .NET Core console app and a .bat file that only checks the AWS CLI version:
aws --version
PAUSE
and started it from .NET:
string pathToRun = @"C:\Users\Adam\source\repos\StaticWeb\StaticWeb\run.bat";
Process p = new Process();
p.StartInfo.FileName = pathToRun;
// Run the process and wait for it to complete
p.Start();
p.WaitForExit();
Error
aws --version
'aws' is not recognized as an internal or external command,
operable program or batch file.
If I run run.bat manually, it works properly. I have both the 32-bit and 64-bit AWS CLI installed on my computer.

I've found the solution. The 'aws' is not recognized error means the spawned process could not find aws on its PATH, so I replaced the aws keyword with the full path to the AWS CLI executable. I don't need the .bat file anymore.
string command = $"/C start \"\" \"C:/Program Files/Amazon/AWSCLIV2/aws.exe\" --version";
ProcessStartInfo info = new ProcessStartInfo();
info.FileName = "cmd.exe";
info.Arguments = command;
info.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden;
var process = new Process();
process.StartInfo = info;
process.Start();
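The same full-path technique carries over to other languages. As a minimal sketch (the executable path in the commented usage is an assumption; adjust it to your install location), the invocation might look like this in Python:

```python
import subprocess

def run_cli(executable, args):
    """Run an external CLI by explicit path; return its exit code and stdout."""
    result = subprocess.run(
        [executable, *args],
        capture_output=True,  # capture stdout/stderr instead of inheriting them
        text=True,            # decode output as text
    )
    return result.returncode, result.stdout

# Hypothetical usage -- the path below is an assumption, not a verified location:
# code, out = run_cli(r"C:\Program Files\Amazon\AWSCLIV2\aws.exe", ["--version"])
```

Passing the explicit executable path sidesteps PATH differences between an interactive shell and a spawned process, which is exactly what the .bat approach ran into.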

Related

CDK Ec2 MultipartUserData control script order

I am using CDK to provision some EC2 instances and configure them using user-data.
My user data consists of two parts:
a cloud-config file
a shell script
What I have been noticing is that the shell script executes before my cloud-config finishes, so the script fails because its dependencies have not finished downloading.
Is there a way to control the run order? The reason I did not do all the configuration in the cloud-config is that I need to pass some arguments to the script, which was easy using ec2.UserData.forLinux().addExecuteFileCommand.
const multipartUserData = new ec2.MultipartUserData();
multipartUserData.addUserDataPart(
  this.createBootstrapConfig(),
  'text/cloud-config; charset="utf8"'
);
multipartUserData.addUserDataPart(
  this.runInstallationScript(),
  'text/x-shellscript; charset="utf8"'
);
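Under the hood, multipart user data is a MIME multipart/mixed document that cloud-init splits into parts. As a rough illustration of the structure the CDK construct emits (part contents below are placeholders), the same document can be assembled with Python's standard library:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_user_data(cloud_config: str, shell_script: str) -> str:
    """Assemble multipart user data: the cloud-config part first, then the script."""
    msg = MIMEMultipart()  # multipart/mixed by default
    msg.attach(MIMEText(cloud_config, "cloud-config", "utf-8"))
    msg.attach(MIMEText(shell_script, "x-shellscript", "utf-8"))
    return msg.as_string()
```

Whether cloud-init sequences the parts strictly by position depends on the cloud-init version and part types, so treat this as a picture of the document format rather than a guarantee of execution order.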

Launch and install applications on EC2 instance using aws sdk

I don't know if this is possible in the first place.
The requirement is to launch an EC2 instance using the AWS SDK (I know this is possible) based on some application logic.
Then I want to install some application, say Docker, on the newly launched instance.
Is this possible using the SDK? Or is my idea itself wrong, and is there a better solution for this scenario?
Can I run a command on a running instance using the SDK?
Yes, you can install anything on EC2 when it is launched by providing a script in the user-data section. This is also possible from the AWS SDK: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_UserData.html
For example, you can pass a user-data script that installs Docker:
#!/bin/bash
yum install -y docker
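With boto3 (Python), the user-data script is passed as a plain string to run_instances and the SDK base64-encodes it for you. A minimal sketch, where the AMI ID and instance type are placeholders:

```python
# User data must be an executable script, so include the shebang line.
USER_DATA = """#!/bin/bash
yum install -y docker
systemctl enable --now docker
"""

def launch_kwargs(ami_id: str) -> dict:
    """Build run_instances arguments; boto3 base64-encodes UserData itself."""
    return {
        "ImageId": ami_id,           # placeholder AMI ID
        "InstanceType": "t3.micro",  # placeholder instance type
        "MinCount": 1,
        "MaxCount": 1,
        "UserData": USER_DATA,
    }

# Hypothetical usage (requires AWS credentials):
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.run_instances(**launch_kwargs("ami-xxxxxxxx"))
```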
Installing applications on an already-running instance is possible with SSM Run Command, e.g. using boto3:
import boto3

ssm_client = boto3.client('ssm')
response = ssm_client.send_command(
    InstanceIds=['i-xxxxxxx'],
    DocumentName="AWS-RunShellScript",
    Parameters={'commands': ['echo "abc" > 1.txt']},
)
command_id = response['Command']['CommandId']
output = ssm_client.get_command_invocation(
    CommandId=command_id,
    InstanceId='i-xxxxxxxx',
)
print(output)
The SSM client runs commands on one or more managed instances.
You can also wrap this in a function:
def execute_commands_on_linux_instances(ssm_client, commands, instance_ids):
    response = ssm_client.send_command(
        DocumentName="AWS-RunShellScript",  # preconfigured AWS document
        Parameters={'commands': commands},
        InstanceIds=instance_ids,
    )
    return response

ssm_client = boto3.client('ssm')
commands = ['ifconfig']
instance_ids = ['i-xxxxxxxx']
execute_commands_on_linux_instances(ssm_client, commands, instance_ids)
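One caveat: send_command is asynchronous, so calling get_command_invocation immediately can return before the command has finished. A small polling helper, sketched with the client passed in (so it can be exercised against a stub); the status names are SSM's documented terminal states:

```python
import time

def wait_for_command(ssm_client, command_id, instance_id,
                     delay=2.0, max_attempts=30):
    """Poll get_command_invocation until the command reaches a terminal status."""
    terminal = {"Success", "Failed", "Cancelled", "TimedOut"}
    for _ in range(max_attempts):
        output = ssm_client.get_command_invocation(
            CommandId=command_id,
            InstanceId=instance_id,
        )
        if output["Status"] in terminal:
            return output
        time.sleep(delay)
    raise TimeoutError(f"Command {command_id} did not finish in time")
```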

Running updates on EC2s that roll back on failure of status check

I'm setting up a patch process for EC2 servers running a web application.
I need to build an automated process that installs system updates but reverts to the last working EC2 instance if the web application fails a status check.
I’ve been trying to do this using an Automation Document in EC2 Systems Manager that performs the following steps:
1. Stop the EC2 instance
2. Create an AMI from the instance
3. Launch a new instance from the newly created AMI
4. Run updates
5. Run a status check on the web application
6. If the check fails, stop the new instance and restart the original instance
The Automation Document runs the first 5 steps successfully, but I can't work out how to trigger step 6. Can I do this within the Automation Document? What output would I be able to call from step 5? If it uses aws:runCommand, should the runCommand trigger a new Automation Document or another AWS tool?
I tried the following to solve this, which more or less worked:
Included an aws:runCommand action in the automation document
This ran the DocumentName "AWS-RunShellScript" with the following parameters:
Downloaded the script from s3:
sudo aws s3 cp s3://path/to/s3/script.sh /tmp/script.sh
Set the file to executable:
chmod +x /tmp/script.sh
Executed the script using variables set in, or generated by the automation document
bash /tmp/script.sh -o {{VAR1}} -n {{VAR2}} -i {{VAR3}} -l {{VAR4}} -w {{VAR5}}
The script included the following getopts command to set the inputted variables:
while getopts o:n:i:l:w: option
do
  case "${option}" in
    o) VAR1=${OPTARG};;
    n) VAR2=${OPTARG};;
    i) VAR3=${OPTARG};;
    l) VAR4=${OPTARG};;
    w) VAR5=${OPTARG};;
  esac
done
The bash script used the variables to run the status check, and roll back to last working instance if it failed.
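The roll-back decision itself can also be expressed directly with boto3. A hedged sketch of step 6 (the client is passed in, the instance IDs are placeholders, and the health check is assumed to have already produced a boolean):

```python
def rollback_if_unhealthy(ec2_client, healthy, new_instance_id, original_instance_id):
    """Stop the patched instance and restart the original if the status check failed."""
    if healthy:
        return "kept-new"
    ec2_client.stop_instances(InstanceIds=[new_instance_id])
    ec2_client.start_instances(InstanceIds=[original_instance_id])
    return "rolled-back"

# Hypothetical usage (requires AWS credentials):
#   import boto3
#   ec2 = boto3.client("ec2")
#   rollback_if_unhealthy(ec2, check_passed, "i-new", "i-old")
```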

AWS EMR (4.x-5.x) classpath for custom jar step

When adding a custom jar step for an EMR cluster - how do you set the classpath to a dependent jar (required library)?
Let's say I have my jar file - myjar.jar but I need an external jar to run it - dependency.jar. Where do you configure this when creating the cluster? I am not using the command line, using the Advanced Options interface.
Thought I would post this after spending a number of hours poking around and reading outdated documentation.
The 2.x/3.x documentation that talks about setting HADOOP_CLASSPATH does not work, and it explicitly does not apply to 4.x and above anyway. Somewhere you need to specify a --libjars option; however, specifying it in the arguments list does not work either.
For example:
Step Name: MyCustomStep
Jar Location: s3://somebucket/myjar.jar
Arguments:
myclassname
option1
option2
--libjars dependentlib.jar
Copy your required jars to /usr/lib/hadoop-mapreduce/ in a bootstrap action. No other changes are necessary. Additional info below:
This command below works for me to copy a specific JDBC driver version:
sudo aws s3 cp s3://<your bucket>/mysql-connector-java-5.1.23-bin.jar /usr/lib/hadoop-mapreduce/
I have other dependencies, so I have a bootstrap action for each jar I need copied; of course, you could put all the copies in a single bash script. Below is the .NET code I use to get a bootstrap action to run the copy script. I am using .NET SDK version 3.3.* and launching the job with release label emr-5.2.0.
public static BootstrapActionConfig CopyEmrJarDependency(string jarName)
{
    return new BootstrapActionConfig()
    {
        Name = $"Copy jars for EMR dependency: {jarName}",
        ScriptBootstrapAction = new ScriptBootstrapActionConfig()
        {
            Path = $"s3n://{Config.AwsS3CodeBucketName}/EMR/Scripts/copy-thirdPartyJar.sh",
            Args = new List<string>()
            {
                $"s3://{Config.AwsS3CodeBucketName}/EMR/Java/lib/{jarName}",
                "/usr/lib/hadoop-mapreduce/"
            }
        }
    };
}
Note that the ScriptBootstrapActionConfig Path property uses the protocol "s3n://", but the protocol for the aws cp command should be "s3://"
My script copy-thirdPartyJar.sh contains the following:
#!/bin/bash
# $1 = location of jar
# $2 = attempted magic directory for java classpath
sudo aws s3 cp "$1" "$2"
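If you launch the cluster programmatically rather than through the console, the same bootstrap action can be expressed as a plain request structure. A sketch in Python using boto3's run_job_flow shape (the bucket, script path, and jar name are placeholders):

```python
def copy_jar_bootstrap_action(code_bucket: str, jar_name: str) -> dict:
    """Bootstrap action that copies one jar into /usr/lib/hadoop-mapreduce/."""
    return {
        "Name": f"Copy jar for EMR dependency: {jar_name}",
        "ScriptBootstrapAction": {
            # boto3 takes s3:// here, unlike the older s3n:// in the .NET example
            "Path": f"s3://{code_bucket}/EMR/Scripts/copy-thirdPartyJar.sh",
            "Args": [
                f"s3://{code_bucket}/EMR/Java/lib/{jar_name}",
                "/usr/lib/hadoop-mapreduce/",
            ],
        },
    }

# Hypothetical usage inside run_job_flow:
#   emr = boto3.client("emr")
#   emr.run_job_flow(..., BootstrapActions=[copy_jar_bootstrap_action("my-bucket", "dependency.jar")])
```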

How to create AWS Elastic Beanstalk environment using java sdk?

Can anyone help me with, or provide any sources for, creating an AWS Elastic Beanstalk environment using a Java program and deploying our application to it?
Thank you in advance.
You can download the AWS Java SDK here. It is also in the maven repository:
Maven:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.9.7</version>
</dependency>
Gradle:
'com.amazonaws:aws-java-sdk:1.9.7'
Now, onto using the sdk. You might want to read up on getting started with the aws sdk.
Here is some very watered down code to get you started:
import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient;
import com.amazonaws.services.elasticbeanstalk.model.*;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.File;
public class AwsTest {
    public static void main(String[] args) {
        AWSElasticBeanstalkClient eb = new AWSElasticBeanstalkClient();

        // Create Application
        CreateApplicationRequest request = new CreateApplicationRequest("myAppName");
        eb.createApplication(request);

        // Create Environment
        CreateEnvironmentRequest envRequest = new CreateEnvironmentRequest("myAppName", "env-name");
        envRequest.setSolutionStackName("64bit Amazon Linux 2014.09 v1.0.9 running Tomcat 7 Java 7");
        envRequest.setVersionLabel("application Version");
        eb.createEnvironment(envRequest);

        // Deploy code
        CreateStorageLocationResult location = eb.createStorageLocation();
        String bucket = location.getS3Bucket();
        File file = new File("myapp.zip");
        PutObjectRequest object = new PutObjectRequest(bucket, "myapp.zip", file);
        new AmazonS3Client().putObject(object);

        CreateApplicationVersionRequest versionRequest = new CreateApplicationVersionRequest();
        versionRequest.setVersionLabel("myversion");
        versionRequest.setApplicationName("myAppName");
        S3Location s3 = new S3Location(bucket, "myapp.zip");
        versionRequest.setSourceBundle(s3);

        UpdateEnvironmentRequest updateRequest = new UpdateEnvironmentRequest();
        updateRequest.setEnvironmentName("env-name"); // the update must target the environment
        updateRequest.setVersionLabel("myversion");
        eb.updateEnvironment(updateRequest);
    }
}
There is a small piece of code missing in the above given code under this section,
CreateApplicationVersionRequest versionRequest = new CreateApplicationVersionRequest();
versionRequest.setVersionLabel("myversion");
versionRequest.setApplicationName("myAppName");
S3Location s3 = new S3Location(bucket, "myapp.zip");
versionRequest.setSourceBundle(s3);
You need to add eb.createApplicationVersion(versionRequest); in order to create a new version from your own source files. Only then can you deploy the new version to the running environment.
A convenient method for deploying an AWS Elastic Beanstalk environment is to use the AWS Toolkit for Eclipse.
It allows you to write and test your code locally, then create an Elastic Beanstalk environment and deploy your code to the environment.
The Elastic Beanstalk management console can also be used to deploy a Java environment with a Sample Application, which you can then override with your own code.
See also:
Deploying an Application Using AWS Elastic Beanstalk
AWS Elastic Beanstalk Documentation
Here is the updated AWS SDK for Java V2 to create an Environment for Elastic Beanstalk
Region region = Region.US_WEST_2;
ElasticBeanstalkClient beanstalkClient = ElasticBeanstalkClient.builder()
        .region(region)
        .build();

ConfigurationOptionSetting setting1 = ConfigurationOptionSetting.builder()
        .namespace("aws:autoscaling:launchconfiguration")
        .optionName("IamInstanceProfile")
        .resourceName("aws-elasticbeanstalk-ec2-role")
        .build();

CreateEnvironmentRequest applicationRequest = CreateEnvironmentRequest.builder()
        .description("An AWS Elastic Beanstalk environment created using the AWS Java API")
        .environmentName("MyEnviron8")
        .solutionStackName("64bit Amazon Linux 2 v3.2.12 running Corretto 11")
        .applicationName("TestApp")
        .cnamePrefix("CNAMEPrefix")
        .optionSettings(setting1)
        .build();

CreateEnvironmentResponse response = beanstalkClient.createEnvironment(applicationRequest);
To learn how to get up and running with the AWS SDK for Java V2, see https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html.
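For comparison, here is a sketch of the same request in Python with boto3. The application name, environment name, and solution stack mirror the Java V2 example above and are placeholders; note that the instance-profile role is supplied as the option's Value here:

```python
def create_environment_kwargs() -> dict:
    """Arguments for elasticbeanstalk.create_environment, mirroring the Java V2 example."""
    return {
        "ApplicationName": "TestApp",
        "EnvironmentName": "MyEnviron8",
        "Description": "An AWS Elastic Beanstalk environment created using boto3",
        "SolutionStackName": "64bit Amazon Linux 2 v3.2.12 running Corretto 11",
        "CNAMEPrefix": "CNAMEPrefix",
        "OptionSettings": [
            {
                "Namespace": "aws:autoscaling:launchconfiguration",
                "OptionName": "IamInstanceProfile",
                "Value": "aws-elasticbeanstalk-ec2-role",
            },
        ],
    }

# Hypothetical usage (requires AWS credentials):
#   import boto3
#   eb = boto3.client("elasticbeanstalk")
#   response = eb.create_environment(**create_environment_kwargs())
```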