I could not find any examples of this online, nor could I find the documentation explaining how to do this. Basically I have a list of Windows EC2 instances and I need to run the quser command in each one of them to check how many users are logged on.
It is possible to do this using the AWS Systems Manager service and running the AWS-RunPowerShellScript command. I only found examples using the AWS CLI, something like this:
aws ssm send-command --instance-ids "instance ID" --document-name "AWS-RunPowerShellScript" --comment "Get Users" --parameters commands=quser --output text
But how can I accomplish this using the AWS Java SDK 1.11.x ?
@Alexandre Krabbe it's been more than a year since you asked this question, so I'm not sure the answer will still help you. I was trying to do the same thing recently, and that led me to this unanswered question. I ended up solving the problem and thought my answer could help other people facing the same issue. Here is a code snippet:
public void runCommand() throws InterruptedException {
    //Command to be run
    final String ssmCommand = "ls -l";
    Map<String, List<String>> params = new HashMap<String, List<String>>(){{
        put("commands", new ArrayList<String>(){{ add(ssmCommand); }});
    }};
    int timeoutInSecs = 5;
    //You can add multiple instance ids as additional values
    Target target = new Target().withKey("InstanceIds").withValues("instance-id");
    //Create the SSM client.
    //The builder can be configured for your preferred way of authentication;
    //use withRegion to specify your region
    AWSSimpleSystemsManagement ssm = AWSSimpleSystemsManagementClientBuilder.standard().build();
    //Build a send-command request
    SendCommandRequest commandRequest = new SendCommandRequest()
            .withTargets(target)
            .withDocumentName("AWS-RunShellScript")
            .withParameters(params);
    //The result holds a commandId, which is used to track the execution further
    SendCommandResult commandResult = ssm.sendCommand(commandRequest);
    String commandId = commandResult.getCommand().getCommandId();
    //Loop until the invocation ends
    String status;
    do {
        ListCommandInvocationsRequest request = new ListCommandInvocationsRequest()
                .withCommandId(commandId)
                .withDetails(true);
        //You get one invocation per EC2 instance that you added to the target.
        //For a single instance use get(0); otherwise loop over the invocations.
        CommandInvocation invocation = ssm.listCommandInvocations(request).getCommandInvocations().get(0);
        status = invocation.getStatus();
        if (status.equals("Success")) {
            //The command output holds the output of running the command,
            //e.g. the list of directories in the case of ls
            String commandOutput = invocation.getCommandPlugins().get(0).getOutput();
            //Process the output
        }
        //Wait for a few seconds before checking the invocation status again
        try {
            TimeUnit.SECONDS.sleep(timeoutInSecs);
        } catch (InterruptedException e) {
            //Handle not being able to sleep
        }
    } while (status.equals("Pending") || status.equals("InProgress"));
    if (!status.equals("Success")) {
        //Command ended up in a failure
    }
}
In SDK 1.11.x, you can also use the built-in waiter, something like:
Waiter<GetCommandInvocationRequest> waiter = ssmClient.waiters().commandExecuted();
waiter.run(new WaiterParameters<>(new GetCommandInvocationRequest()
        .withCommandId(commandId)
        .withInstanceId(instanceId)));
Using the AWS .NET SDK, I tried to put an event with EventBridge and then track it with CloudWatch.
Here is how I put the event:
using (var eventClient = new AmazonEventBridgeClient(credentials, RegionEndpoint.USEast1))
{
    PutEventsResponse result = await eventClient.PutEventsAsync(new PutEventsRequest
    {
        Entries = new List<PutEventsRequestEntry>
        {
            new PutEventsRequestEntry
            {
                DetailType = "TestEvent",
                EventBusName = "default",
                Source = "mySource",
                Detail = JsonConvert.SerializeObject(new TestClass { Message = "myMessage" }),
                Time = DateTime.UtcNow
            }
        }
    });
}
And this is what I see in the logs.
Can somebody explain why I don't see the Detail and DetailType I defined? Am I doing something wrong?
Thank you in advance.
Okay, I finally found the solution. All I needed was to configure the input to my rule target.
Here I chose "Part of the matched event" and defined what I want to receive. However, this option isn't available for a CloudWatch Logs target, so this answer isn't fully complete.
I start the gcloud SDK docker image:
docker run -ti --rm --expose=8085 -p 8085:8085 google/cloud-sdk:latest
then I run:
gcloud beta emulators pubsub start --project=my-project --host-port=0.0.0.0:8085
then I stop the server and run:
gcloud beta emulators pubsub env-init
gives:
export PUBSUB_EMULATOR_HOST=0.0.0.0:8085
but there is no project ID. How can I set up the project for tests? How can I create topics and subscriptions?
version:
gcloud version
gives:
Google Cloud SDK 236.0.0
...
pubsub-emulator 2019.02.22
You are launching the Pub/Sub emulator with project my-project in your 2nd command. Once it is running, don't kill it; leave it running.
To create the topics and subscriptions, you have to use one of the SDKs. I created a demo project that does this using the Java SDK: https://github.com/nhartner/pubsub-emulator-demo/
The relevant code is this:
@Component
public class TestPubSubConfig {

    private final TransportChannelProvider channelProvider;
    private final CredentialsProvider credentialsProvider;
    private String projectId;
    private String topicName = "test-topic";
    private String subscriptionName = "test-subscription";

    TestPubSubConfig(@Autowired @Value("${spring.cloud.gcp.pubsub.emulator-host}") String emulatorHost,
                     @Autowired @Value("${spring.cloud.gcp.project-id}") String projectId) throws IOException {
        this.projectId = projectId;
        ManagedChannel channel = ManagedChannelBuilder.forTarget(emulatorHost).usePlaintext().build();
        channelProvider = FixedTransportChannelProvider.create(GrpcTransportChannel.create(channel));
        credentialsProvider = NoCredentialsProvider.create();
        createTopic(topicName);
        createSubscription(topicName, subscriptionName);
    }

    @Bean
    public Publisher testPublisher() throws IOException {
        return Publisher.newBuilder(ProjectTopicName.of(projectId, topicName))
                .setChannelProvider(channelProvider)
                .setCredentialsProvider(credentialsProvider)
                .build();
    }

    private void createSubscription(String topicName, String subscriptionName) throws IOException {
        ProjectTopicName topic = ProjectTopicName.of(projectId, topicName);
        ProjectSubscriptionName subscription = ProjectSubscriptionName.of(projectId, subscriptionName);
        try {
            subscriptionAdminClient()
                    .createSubscription(subscription, topic, PushConfig.getDefaultInstance(), 100);
        } catch (AlreadyExistsException e) {
            // this is fine, already created
        }
    }

    private void createTopic(String topicName) throws IOException {
        ProjectTopicName topic = ProjectTopicName.of(projectId, topicName);
        try {
            topicAdminClient().createTopic(topic);
        } catch (AlreadyExistsException e) {
            // this is fine, already created
        }
    }

    private TopicAdminClient topicAdminClient() throws IOException {
        return TopicAdminClient.create(
                TopicAdminSettings.newBuilder()
                        .setTransportChannelProvider(channelProvider)
                        .setCredentialsProvider(credentialsProvider).build());
    }

    private SubscriptionAdminClient subscriptionAdminClient() throws IOException {
        return SubscriptionAdminClient.create(SubscriptionAdminSettings.newBuilder()
                .setTransportChannelProvider(channelProvider)
                .setCredentialsProvider(credentialsProvider)
                .build());
    }
}
A possible gotcha we uncovered while working with the Pub/Sub emulator is that the documentation says:
In this case, the project ID can be any valid string; it does not
need to represent a real GCP project because the Cloud Pub/Sub
emulator runs locally.
"Any valid string" in this context is not just any string but specifically a valid one, meaning it has to look like a valid GCP project ID. In our testing this specifically meant strings that match the regex pattern:
/^[a-z]-[a-z]-\d{6}$/
Once supplied with a valid project ID, the emulator works as advertised. If you have a sandbox project in GCP, you can use its ID, or you can make up your own that matches that pattern. Once you've got that far, you can follow the remainder of the "Testing apps locally with the emulator" documentation.
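To make the constraint concrete, here is a minimal, hypothetical sketch (plain Java, no GCP dependencies) that checks a candidate project ID against the exact pattern quoted above. The pattern is only what we observed in testing, not an official GCP rule, and the class and method names are made up for illustration:

```java
import java.util.regex.Pattern;

public class ProjectIdCheck {
    // The pattern quoted above, as observed in testing: single lowercase
    // letters separated by hyphens, ending in six digits.
    private static final Pattern EMULATOR_PROJECT_ID =
            Pattern.compile("^[a-z]-[a-z]-\\d{6}$");

    public static boolean looksValid(String projectId) {
        return EMULATOR_PROJECT_ID.matcher(projectId).matches();
    }

    public static void main(String[] args) {
        System.out.println(looksValid("a-b-123456")); // matches the pattern
        System.out.println(looksValid("my-project")); // no trailing digits
    }
}
```

Running this prints true for an ID shaped like the pattern and false for one that isn't, which is a quick way to sanity-check the ID you plan to hand to the emulator.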
I am trying to access my AWS DataPipelines using AWS Java SDK v1.7.5, but listPipelines is returning an empty list in the code below.
I have DataPipelines that are scheduled in the US East region, which I believe I should be able to list using the listPipelines method of the DataPipelineClient. I am already using the ProfilesConfigFile to authenticate and connect to S3, DynamoDB and Kinesis without a problem. I've granted the PowerUserAccess Access Policy to the IAM user specified in the config file. I've also tried applying the Administrator Access policy to the user, but it didn't change anything. Here's the code I'm using:
//Establish credentials for connecting to AWS.
File configFile = new File(System.getProperty("user.home"), ".aws/config");
ProfilesConfigFile profilesConfigFile = new ProfilesConfigFile(configFile);
AWSCredentialsProvider awsCredentialsProvider =
        new ProfileCredentialsProvider(profilesConfigFile, "default");

//Set up the AWS DataPipeline connection.
DataPipelineClient dataPipelineClient = new DataPipelineClient(awsCredentialsProvider);
Region usEast1 = Region.getRegion(Regions.US_EAST_1);
dataPipelineClient.setRegion(usEast1);

//List all pipelines we have access to.
ListPipelinesResult listPipelinesResult = dataPipelineClient.listPipelines(); //empty list returned here.
for (PipelineIdName p : listPipelinesResult.getPipelineIdList()) {
    System.out.println(p.getId());
}
Make sure to check whether there are more results - I've noticed that sometimes the API returns only a few pipelines (the list could even be empty) but sets a flag indicating there are more results. You can retrieve them like this:
void listPipelines(DataPipelineClient dataPipelineClient, String marker) {
    ListPipelinesRequest request = new ListPipelinesRequest();
    if (marker != null) {
        request.setMarker(marker);
    }
    ListPipelinesResult listPipelinesResult = dataPipelineClient.listPipelines(request);
    for (PipelineIdName p : listPipelinesResult.getPipelineIdList()) {
        System.out.println(p.getId());
    }
    // Call recursively if there are more results:
    if (listPipelinesResult.getHasMoreResults()) {
        listPipelines(dataPipelineClient, listPipelinesResult.getMarker());
    }
}
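The same marker-driven loop can also be written iteratively. Below is a hedged, self-contained sketch of the pagination pattern with the AWS call abstracted behind a function, so the shape of the loop runs without the SDK; `Page` is a made-up stand-in for `ListPipelinesResult` (a batch of items plus the next marker), not a real SDK type:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class MarkerPagination {
    // Stand-in for ListPipelinesResult: one batch of ids plus the next
    // marker (null when getHasMoreResults() would be false).
    static class Page {
        final List<String> items;
        final String nextMarker;
        Page(List<String> items, String nextMarker) {
            this.items = items;
            this.nextMarker = nextMarker;
        }
    }

    // Keep requesting pages, passing the previous marker, until none remains.
    static List<String> fetchAll(Function<String, Page> fetchPage) {
        List<String> all = new ArrayList<>();
        String marker = null;
        do {
            Page page = fetchPage.apply(marker);
            all.addAll(page.items);
            marker = page.nextMarker;
        } while (marker != null);
        return all;
    }

    public static void main(String[] args) {
        // Two simulated pages standing in for successive listPipelines calls.
        List<String> ids = fetchAll(marker ->
                marker == null ? new Page(List.of("df-pipeline-1", "df-pipeline-2"), "m1")
                               : new Page(List.of("df-pipeline-3"), null));
        System.out.println(ids); // [df-pipeline-1, df-pipeline-2, df-pipeline-3]
    }
}
```

In the real code, `fetchPage` would build a `ListPipelinesRequest`, set the marker if non-null, and call `dataPipelineClient.listPipelines(request)`.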
I am new to the AWS Java SDK. I am trying to write code to control an instance and to get EC2 information.
I am able to start an instance and also stop it. But as you may know, it takes some time for an instance to start, so I want to wait there (without using Thread.sleep) until it's up; likewise, when I'm stopping an instance, the code should wait until it has stopped before I proceed to the next step.
Here's the code:
AmazonEC2 ec2 = new AmazonEC2Client(credentialsProvider);
DescribeInstancesResult describeInstancesResult = ec2.describeInstances();
List<Reservation> reservations = describeInstancesResult.getReservations();
Set<Instance> instances = new HashSet<Instance>();
for (Reservation reservation : reservations) {
    instances.addAll(reservation.getInstances());
}
for (Instance instance : instances) {
    if (instance.getInstanceId().equals("myimage")) {
        List<String> instancesToStart = new ArrayList<String>();
        instancesToStart.add(instance.getInstanceId());
        StartInstancesRequest startr = new StartInstancesRequest();
        startr.setInstanceIds(instancesToStart);
        ec2.startInstances(startr);
        Thread.sleep(60 * 1000);
    }
    if (instance.getState().getName().equals("running")) {
        List<String> instancesToStop = new ArrayList<String>();
        instancesToStop.add(instance.getInstanceId());
        StopInstancesRequest stoptr = new StopInstancesRequest();
        stoptr.setInstanceIds(instancesToStop);
        ec2.stopInstances(stoptr);
    }
}
Also, I'd like to mention that whenever I try to get the list of images, it hangs in the code below.
DescribeImagesResult describeImagesResult = ec2.describeImages();
You can get an instance of the class Instance every time you want to see the updated status, using the same instance ID.
Instance instance = new Instance(<your instance id that you got previously from describe instances>);
To get the updated status, use something like this:
InstanceStatus instat = instance.getStatus();
I think the key here is saving the instance ID of the instance that you care about.
boto in Python has a nice method instance.update() that can be called on an instance to see its status, but I can't find an equivalent in Java.
Hope this helps.
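If you want to avoid a fixed Thread.sleep, the usual approach is a polling loop: repeatedly describe the instance and check its state until it reports the one you want. Here is a hedged, SDK-free sketch of that pattern, with the describe call abstracted behind a `Supplier<String>` so the loop itself is runnable; in real code the supplier would wrap `ec2.describeInstances(...)` and return `instance.getState().getName()`:

```java
import java.util.function.Supplier;

public class WaitForState {
    // Poll the state supplier until it reports the desired state, sleeping
    // between attempts; give up after maxAttempts and report failure.
    static boolean waitUntil(Supplier<String> currentState, String desired,
                             int maxAttempts, long sleepMillis) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (desired.equals(currentState.get())) {
                return true;
            }
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulated states standing in for successive DescribeInstances calls.
        String[] states = {"pending", "pending", "running"};
        int[] call = {0};
        boolean started = waitUntil(
                () -> states[Math.min(call[0]++, states.length - 1)],
                "running", 5, 10);
        System.out.println(started); // true: the third poll reported "running"
    }
}
```

Note that later 1.11.x versions of the AWS Java SDK also ship built-in waiters (for example `ec2.waiters().instanceRunning()`) that implement this polling for you.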
Is it possible to request "Snapshot Logs" through AWS SDK somehow?
It's possible to do it through AWS console:
Cross-posted to the Amazon forum.
Requesting a log snapshot is a three-step process. First, you have to make an environment information request:
elasticBeanstalk.requestEnvironmentInfo(
        new RequestEnvironmentInfoRequest()
                .withEnvironmentName(environmentName)
                .withInfoType("tail"));
Then you have to retrieve the environment information:
final List<EnvironmentInfoDescription> envInfos =
        elasticBeanstalk.retrieveEnvironmentInfo(
                new RetrieveEnvironmentInfoRequest()
                        .withEnvironmentName(environmentName)
                        .withInfoType("tail")).getEnvironmentInfo();
This returns a list of environment info descriptions, with the EC2 instance id and the URL to an S3 object that contains the log snapshot. You can then retrieve the logs with:
DefaultHttpClient client = new DefaultHttpClient();
DefaultHttpRequestRetryHandler retryhandler =
        new DefaultHttpRequestRetryHandler(3, true);
client.setHttpRequestRetryHandler(retryhandler);
for (EnvironmentInfoDescription environmentInfoDescription : envInfos) {
    System.out.println(environmentInfoDescription.getEc2InstanceId());
    HttpGet rq = new HttpGet(environmentInfoDescription.getMessage());
    try {
        HttpResponse response = client.execute(rq);
        InputStream content = response.getEntity().getContent();
        System.out.println(IOUtils.toString(content));
    } catch (Exception e) {
        System.out.println("Exception fetching " +
                environmentInfoDescription.getMessage());
    }
}
I hope this helps!