How to poll in AWS SDK Java? - amazon-web-services

I am new to the AWS SDK for Java. I am trying to write code through which I can control instances and get EC2 information.
I am able to start an instance and also stop it. But, as you know, it takes some time for an instance to start, so I want to wait there (without using Thread.sleep) until it is up; likewise, when stopping an instance, I want to block until it has fully stopped before proceeding to the next step.
Here's the code:
AmazonEC2 ec2 = new AmazonEC2Client(credentialsProvider);
DescribeInstancesResult describeInstancesResult = ec2.describeInstances();
List<Reservation> reservations = describeInstancesResult.getReservations();
Set<Instance> instances = new HashSet<Instance>();
for (Reservation reservation : reservations) {
    instances.addAll(reservation.getInstances());
}
for (Instance instance : instances) {
    if (instance.getInstanceId().equals("myimage")) {
        List<String> instancesToStart = new ArrayList<String>();
        instancesToStart.add(instance.getInstanceId());
        StartInstancesRequest startr = new StartInstancesRequest();
        startr.setInstanceIds(instancesToStart);
        ec2.startInstances(startr);
        Thread.sleep(60 * 1000); // the wait I want to get rid of
    }
    if (instance.getState().getName().equals("running")) {
        List<String> instancesToStop = new ArrayList<String>();
        instancesToStop.add(instance.getInstanceId());
        StopInstancesRequest stoptr = new StopInstancesRequest();
        stoptr.setInstanceIds(instancesToStop);
        ec2.stopInstances(stoptr);
    }
}
Also, whenever I try to get the list of images, the code below hangs:
DescribeImagesResult describeImagesResult = ec2.describeImages();

You can re-fetch the "Instance" every time you want to see the updated status, using the same instance id you got previously from describeInstances:
DescribeInstancesRequest statusRequest = new DescribeInstancesRequest()
        .withInstanceIds("<your instance id from describeInstances>");
Instance refreshed = ec2.describeInstances(statusRequest)
        .getReservations().get(0).getInstances().get(0);
Then get the updated state with something like this:
String state = refreshed.getState().getName(); // e.g. "pending", "running", "stopped"
I think the key here is saving the instance id of the instance that you care about.
boto in Python has a nice method, instance.update(), that can be called on an instance to refresh its status, but I can't find an equivalent in Java.
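That said, newer releases of the SDK (1.11.x) do ship an equivalent: the waiter API polls the service for you, with backoff, so no Thread.sleep is needed. A minimal sketch, assuming the ec2 client from the question:
// Block until the instance reaches "running"
ec2.waiters().instanceRunning().run(new WaiterParameters<DescribeInstancesRequest>(
        new DescribeInstancesRequest().withInstanceIds("<your instance id>")));
// Likewise, block until it has stopped
ec2.waiters().instanceStopped().run(new WaiterParameters<DescribeInstancesRequest>(
        new DescribeInstancesRequest().withInstanceIds("<your instance id>")));
run() returns once the state is reached and throws a WaiterTimedOutException if it never is.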
Hope this helps.
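Edit: on describeImages() hanging, note that without filters the call returns metadata for every public AMI in the region (tens of thousands), so it is slow rather than hung. A sketch that scopes the request, assuming you only need images you own:
// Restrict describeImages to your own AMIs instead of every public image
DescribeImagesRequest imagesRequest = new DescribeImagesRequest().withOwners("self");
DescribeImagesResult imagesResult = ec2.describeImages(imagesRequest);
System.out.println("Found " + imagesResult.getImages().size() + " images");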

Related

How to get the instance_id after creating a new server with boto3 in Python

I am launching a new ec2 instance with this code:
ec2 = boto3.resource('ec2',
aws_access_key_id=existing_user.access_id,
aws_secret_access_key=existing_user.secret_id,
region_name='eu-west-2')
instance = ec2.create_instances(
ImageId="ami-084e8c05825742534",
MinCount=1,
MaxCount=1,
InstanceType="t2.micro",
KeyName="KeyPair1",
SecurityGroupIds=[
'sg-0f6e6789ff4e7e7c1',
],
)
print('successfully lauched an instance save it to User db')
print(instance[0])
print(type(instance[0]))
The instance variable holds the new EC2 instance, and printing it outputs something like this:
ec2.Instance(id='i-03ee6121b4e7846d2')
<class 'boto3.resources.factory.ec2.Instance'>
I am new to Python classes and I am not able to access/extract the id, which I need to save to my DB.
Can anybody help with this?
create_instances returns a list of Instance resources; each one has an id attribute which you can print like this or save to the DB:
print(f'EC2 instance "{instance[0].id}"')
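Since this thread is otherwise about the Java SDK, the analogous id retrieval there looks roughly like this for comparison (a sketch, assuming an AmazonEC2 client named ec2Client; the AMI and key pair are the placeholders from the question):
// Launch one instance and capture its id
RunInstancesResult runResult = ec2Client.runInstances(new RunInstancesRequest()
        .withImageId("ami-084e8c05825742534")
        .withMinCount(1)
        .withMaxCount(1)
        .withInstanceType("t2.micro")
        .withKeyName("KeyPair1"));
String newInstanceId = runResult.getReservation().getInstances().get(0).getInstanceId();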

GCP API - Determining what role a resource instance has been created with

For the project I'm on, I am tasked with creating a testing app that uses Terraform to create a resource instance and then test that it was created properly. The purpose is to test the Terraform script result by validating certain characteristics of the created resource. That's the broad outline.
For several of these scripts a resource is assigned a role. It could be a PubSub subscription, DataCatalog, etc.
Example Terraform code for a Spanner Database assigning roles/spanner.databaseAdmin:
resource "google_spanner_database_iam_member" "user_database_admin" {
for_each = toset(var.iam_user_database_admins)
project = var.project
instance = var.instance_id
database = google_spanner_database.spanner_database.name
role = "roles/spanner.databaseAdmin"
member = "user:${each.key}"
}
So my question is this: Is there a way using a .NET GCP API to make a call to determine that the role was assigned? I can test for permissions via a TestIamPermissions method off of the client object and that's what I'm currently doing. But that gives me a sometimes long list of possible permissions. Is there a way to say "does this spanner database have the roles/spanner.databaseAdmin assigned?"
Here's an example of code testing for permissions on a PubSub Subscription:
TestIamPermissionsRequest subscriptionRequest = new TestIamPermissionsRequest
{
    ResourceAsResourceName = SubscriptionName.FromProjectSubscription(projectId, subscriptionId),
    Permissions = {
        "pubsub.subscriptions.get",
        "pubsub.subscriptions.delete",
        "pubsub.subscriptions.update"
    }
};
TestIamPermissionsResponse subscriptionResponse = publisher.TestIamPermissions(subscriptionRequest);
Seems like there ought to be a cleaner way to do this, but being somewhat new to GCP, I haven't found a way yet. Suggestions will be welcome.
Thought I should close this question off with what I eventually discovered. The proper question isn't what role is assigned to an instance of a resource, but which users have been allowed to use the resource, and with what role.
The proper call is GetIamPolicy which is available in the APIs for all of the resources that I've been working with. The problem was that I wasn't seeing anything due to no user accounts being assigned to the resource. I updated the Terraform script to assign a user to the resource with the required roles. When calling GetIamPolicy, it returns an array in the Bindings that lists roles and users that are assigned. This was the information I needed. Going down the path of using TestIamPermissions was unneeded.
Here's an example of my use of this:
bool roleFound = false;
bool userFound = false;
bool exception = false;
try
{
    Policy policyResponse = Client.GetIamPolicy(Resource);
    var bindings = policyResponse.Bindings;
    foreach (var item in bindings)
    {
        if (AcceptedRoles.Contains(item.Role))
            roleFound = true;
        foreach (var user in item.Members)
        {
            // Members come prefixed, e.g. "user:alice@example.com" or "group:ops@example.com";
            // StartsWith avoids an out-of-range error on short member strings
            string testUser = user;
            if (user.StartsWith("user:"))
            {
                testUser = user.Substring(5);
            }
            else if (user.StartsWith("group:"))
            {
                testUser = user.Substring(6);
            }
            if (Settings.UserTestList.Contains(testUser))
                userFound = true;
        }
    }
}
catch (Grpc.Core.RpcException)
{
    exception = true;
}
Assert.True(roleFound);
Assert.True(userFound);
Assert.False(exception);

AWS Java SDK - Running a command using SSM on EC2 instances

I could not find any examples of this online, nor could I find the documentation explaining how to do this. Basically I have a list of Windows EC2 instances and I need to run the quser command in each one of them to check how many users are logged on.
It is possible to do this using the AWS Systems Manager service and running the AWS-RunPowerShellScript command. I only found examples using the AWS CLI, something like this:
aws ssm send-command --instance-ids "instance ID" --document-name "AWS-RunPowerShellScript" --comment "Get Users" --parameters commands=quser --output text
But how can I accomplish this using the AWS Java SDK 1.11.x?
@Alexandre Krabbe, it's been more than a year since you asked this question, so I'm not sure the answer will still help you. But I was trying to do the same thing recently, and that led me to this unanswered question. I ended up solving the problem and thought my answer could help other people facing the same problem. Here is a code snippet for it:
public void runCommand() throws InterruptedException {
    // Command to be run
    String ssmCommand = "ls -l";
    Map<String, List<String>> params = new HashMap<String, List<String>>(){{
        put("commands", new ArrayList<String>(){{ add(ssmCommand); }});
    }};
    int timeoutInSecs = 5;
    // You can add multiple instance ids separated by commas
    Target target = new Target().withKey("InstanceIds").withValues("instance-id");
    // Create the SSM client.
    // The builder can be configured for your preferred way of authentication;
    // use withRegion for specifying your region.
    AWSSimpleSystemsManagement ssm = AWSSimpleSystemsManagementClientBuilder.standard().build();
    // Build a send-command request
    SendCommandRequest commandRequest = new SendCommandRequest()
            .withTargets(target)
            .withDocumentName("AWS-RunShellScript")
            .withParameters(params);
    // The result holds a commandId which is used to track the execution further
    SendCommandResult commandResult = ssm.sendCommand(commandRequest);
    String commandId = commandResult.getCommand().getCommandId();
    // Loop until the invocation ends
    String status;
    do {
        ListCommandInvocationsRequest request = new ListCommandInvocationsRequest()
                .withCommandId(commandId)
                .withDetails(true);
        // You get one invocation per EC2 instance that you added to the target.
        // For a single instance use get(0); otherwise loop over the invocations.
        CommandInvocation invocation = ssm.listCommandInvocations(request).getCommandInvocations().get(0);
        status = invocation.getStatus();
        if (status.equals("Success")) {
            // The command output holds the output of running the command,
            // e.g. the list of directories in case of ls
            String commandOutput = invocation.getCommandPlugins().get(0).getOutput();
            // Process the output
        }
        // Wait for a few seconds before checking the invocation status again
        try {
            TimeUnit.SECONDS.sleep(timeoutInSecs);
        } catch (InterruptedException e) {
            // Handle not being able to sleep
        }
    } while (status.equals("Pending") || status.equals("InProgress"));
    if (!status.equals("Success")) {
        // The command ended in failure
    }
}
In SDK 1.11.x you can also let the built-in waiter do the polling, with something like:
GetCommandInvocationRequest request = new GetCommandInvocationRequest()
        .withCommandId(commandId)
        .withInstanceId(instanceId);
ssmClient.waiters().commandExecuted().run(new WaiterParameters<GetCommandInvocationRequest>(request));
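Once the waiter returns, the same request type fetches the invocation's output. A sketch, assuming the ssmClient, commandId and instanceId from above:
// Fetch the command's status and stdout after the waiter completes
GetCommandInvocationResult invocationResult = ssmClient.getCommandInvocation(
        new GetCommandInvocationRequest()
                .withCommandId(commandId)
                .withInstanceId(instanceId));
System.out.println(invocationResult.getStatus());
System.out.println(invocationResult.getStandardOutputContent());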

How can I get the AWS launch config from an AMI ID

I have made a script that cleans up AMI ids based on instances that are not running.
But I also want to add a delete feature to this script to clean up launch configurations whose AMI ID no longer exists.
good_images = set([instance.image_id for instance in ec2.instances.all()])
# LaunchConfigs in use by AMI
client = boto3.client('autoscaling', region_name=region)
response = client.describe_launch_configurations()
ls_list = []
for LC in response['LaunchConfigurations']:
    (LC['ImageId'])
print ls_list
But it's not working.
Your code:
for LC in response['LaunchConfigurations']:
    (LC['ImageId'])
should be:
for LC in response['LaunchConfigurations']:
    ls_list.append(LC['ImageId'])
used_lc = []
all_lc = []

def used_launch_config():
    for asg in client.describe_auto_scaling_groups()['AutoScalingGroups']:
        launch_config_attached_with_asg = asg['LaunchConfigurationName']
        used_lc.append(launch_config_attached_with_asg)

used_launch_config()
print used_lc

def all_spot_lc():
    for launch_config in client.describe_launch_configurations(MaxRecords=100)['LaunchConfigurations']:
        lc = launch_config['LaunchConfigurationName']
        if str(lc).startswith("string"):
            all_lc.append(lc)

all_spot_lc()
print all_lc
In the end I avoided deleting launch configs based on the AMI; comparing the used launch configs against all of them solved the problem.
I was going about it wrong in the previous code.
Is there a way to increase MaxRecords?

elastic map reduce "keep alive" specification in the java api

How do I set the jobflow to "keep alive" in the Java API, as I do on the command line like this:
elastic-mapreduce --create --alive ...
I have tried adding withKeepJobFlowAliveWhenNoSteps(true), but the jobflow still shuts down when a step fails (for example, if I submit a bad jar).
You need to set withActionOnFailure to let the API know what to do when a step fails, and it has to be set on a per-step basis.
You are probably using withActionOnFailure("TERMINATE_JOB_FLOW") for your StepConfigs.
Change them to withActionOnFailure("CANCEL_AND_WAIT").
Following is the full code to launch a cluster using the Java API, taken from here, with just the needed change applied:
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonElasticMapReduceClient emr = new AmazonElasticMapReduceClient(credentials);

StepFactory stepFactory = new StepFactory();

StepConfig enableDebugging = new StepConfig()
        .withName("Enable debugging")
        .withActionOnFailure("CANCEL_AND_WAIT") // here is the change
        .withHadoopJarStep(stepFactory.newEnableDebuggingStep());

StepConfig installHive = new StepConfig()
        .withName("Install Hive")
        .withActionOnFailure("CANCEL_AND_WAIT") // here is the change
        .withHadoopJarStep(stepFactory.newInstallHiveStep());

RunJobFlowRequest request = new RunJobFlowRequest()
        .withName("Hive Interactive")
        .withSteps(enableDebugging, installHive)
        .withLogUri("s3://myawsbucket/")
        .withInstances(new JobFlowInstancesConfig()
                .withEc2KeyName("keypair")
                .withHadoopVersion("0.20")
                .withInstanceCount(5)
                .withKeepJobFlowAliveWhenNoSteps(true)
                .withMasterInstanceType("m1.small")
                .withSlaveInstanceType("m1.small"));

RunJobFlowResult result = emr.runJobFlow(request);
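If you also want your client code to block until the cluster is actually up (the same polling theme as the main question), newer 1.11.x releases expose EMR waiters as well. A sketch, assuming the emr client and result from above:
// Block until the cluster reaches a running/waiting state
emr.waiters().clusterRunning().run(new WaiterParameters<DescribeClusterRequest>(
        new DescribeClusterRequest().withClusterId(result.getJobFlowId())));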