I am trying to build an on-demand AWS JMeter instance (it could be any testing tool, e.g. SoapUI or Selenium) using Jenkins. I am not looking for the client-server JMeter distributed architecture.
The goal is a cost-effective way to spawn on-demand JMeter instances (not containerization) from Jenkins. The new instance needs JNLP or a Jenkins agent to establish connectivity with the Jenkins master.
Can someone point me to documentation and code (CLI) to spin up an AWS instance, with or without an AMI?
You can use the AWS CLI to manage instances (create, launch, shut down, terminate, etc.). An example command would be:
aws ec2 run-instances --image-id your_image_id --count how_many_instances_you_want --instance-type desired_EC2_instance_type --key-name your_key_pair --security-groups your_EC2_security_group_name
Make sure that the security group allows the following ports (these RMI ports only matter if you later run JMeter in distributed mode):
the port you define as server_port, by default 1099
the port you define as server.rmi.localport
the port(s) you define as client.rmi.localport
More information:
Remote hosts and RMI configuration
Apache JMeter Properties Customization Guide
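To tie this back to the question's JNLP requirement: you can pass a user-data script to run-instances so the new instance registers itself with the Jenkins master on boot. A minimal sketch, assuming an inbound (JNLP) agent node named "jmeter-agent" already exists on the master, and that the URL and secret below are placeholders you must replace:
#!/bin/bash
# Hypothetical user-data bootstrap for Amazon Linux: installs Java,
# downloads the Jenkins inbound agent jar and connects to the master.
JENKINS_URL="http://your-jenkins-master:8080"
AGENT_SECRET="your_agent_secret"
yum install -y java-11-amazon-corretto
curl -sSL -o /opt/agent.jar "${JENKINS_URL}/jnlpJars/agent.jar"
nohup java -jar /opt/agent.jar \
  -jnlpUrl "${JENKINS_URL}/computer/jmeter-agent/jenkins-agent.jnlp" \
  -secret "${AGENT_SECRET}" >/var/log/jenkins-agent.log 2>&1 &
Pass the script with --user-data file://user-data.sh on the run-instances call above; JMeter itself can be installed the same way.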
I am not sure if you are looking for this kind of setup.
Use Terraform: infrastructure as code. You will be able to spawn all the resources required for your test. The steps go like this:
Create a JMeter Docker image
Push it to ECR (see the sketch after this list)
Create a cluster in ECS
Create a task definition
Create a service in the ECS cluster that uses the JMeter image; you can use Fargate to keep it serverless.
On top of all of the above you can use a Jenkins CI/CD pipeline to trigger your Terraform code.
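As a rough sketch of the build-and-push-to-ECR step (the account ID, region and repository name are placeholders, and the ECR repository must already exist):
# Build the image from your JMeter Dockerfile
docker build -t jmeter .
# Authenticate Docker against your ECR registry
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Tag and push
docker tag jmeter:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/jmeter:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/jmeter:latest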
Background:
We have several legacy applications that are running in AWS EC2 instances while we develop a new suite of applications. Our company updates their approved AMI's on a monthly basis, and requires all running instances to run the new AMI's. This forces us to regularly tear down the instances and rebuild them with the new AMI's. In order to comply with these requirements all infrastructure and application deployment must be fully automated.
Approach:
To achieve automation, I'm using Terraform to build the infrastructure and Ansible to deploy the applications. Terraform will create EC2 instances, security groups, SSH keys, load balancers, Route 53 records, and an inventory file for Ansible that includes the IP addresses of the created instances. Ansible will then deploy the legacy applications to the hosts supplied by the inventory file. I have a shell script that executes first the Terraform script and then the Ansible playbooks.
Question:
To achieve full automation I need to run this process whenever an AMI is updated. The current AMI release is stored in Parameter Store and Terraform can detect when there is a change, but I still need to trigger the job manually. We also have an AWS SNS topic to which I can subscribe to receive notification of new AMI releases. My initial thought was to simply put the Terraform/Ansible scripts on an EC2 instance and have a cron job run them monthly. This would likely work, but I wonder if it is the best approach. For starters, I would need to use an EC2 instance which itself would need to be updated with new AMIs, so unless I have another process to do this I would need to do it manually. Second, although our AMIs could potentially be updated monthly, sometimes they are not, so I would sometimes be running the jobs unnecessarily. Of course I could somehow detect whether the AMI ID has changed and run the job accordingly, but it seems like a better approach would be to react to the AWS SNS topic.
Is it possible to run the Terraform/Ansible scripts without having them on a running EC2 instance? And how can I trigger the scripts in response to the SNS topic?
Here are some options I was testing to trigger an Ansible playbook in response to webhooks from Alertmanager, to get some form of self-healing (these might be useful for you):
Run Ansible in AWS Lambda, fronted with API Gateway as a webhook, triggered by Alertmanager -> https://medium.com/@jacoelho/ansible-in-aws-lambda-980bb8b5791b
An SNS receiver in AWS -> subscriber -> AWS Systems Manager, which supports Ansible (the CLI wiring for this is sketched after this list):
https://aws.amazon.com/blogs/mt/keeping-ansible-effortless-with-aws-systems-manager/
Alertmanager targets a Jenkins webhook -> the Jenkins pipeline uses the Ansible plugin to execute playbooks:
https://medium.com/appgambit/ansible-playbook-with-jenkins-pipeline-2846d4442a31
Front the Ansible server with a webhook server that executes Ansible commands as post actions; this can be a Flask-based web server or the Git webhook tool linked below:
https://rubyfaby.medium.com/auto-remediation-with-prometheus-alert-manager-and-ansible-e4d7bdbb6abf
https://github.com/adnanh/webhook
You can also use AWX (Ansible Tower in open-source form), which exposes the Ansible server as an API endpoint (webhook); currently only GitHub and GitLab webhooks are supported.
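For the SNS -> Lambda / Systems Manager options, the wiring itself can be done with two CLI calls. A minimal sketch, where the topic name, the function name "ansible-runner" and both ARNs are placeholders:
# Allow the SNS topic to invoke the Lambda that runs (or forwards) the playbook
aws lambda add-permission \
  --function-name ansible-runner \
  --statement-id sns-invoke \
  --action lambda:InvokeFunction \
  --principal sns.amazonaws.com \
  --source-arn arn:aws:sns:us-east-1:123456789012:ami-updates
# Subscribe the Lambda to the topic
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:ami-updates \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:us-east-1:123456789012:function:ansible-runner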
Has anyone been able to configure Selenoid on AWS ECS? I am able to run the selenoid-ui container, but the Selenoid hub image keeps throwing an error regarding browsers.json. However, I have not been able to find a way to add the browsers.json file, because the container stops before it executes the CMD command.
There is no point in running Selenoid on AWS ECS, as your setup won't scale (your browser containers will be launched on the same EC2 instance where your Selenoid container is running). With ECS, you run your service on a cluster, so either your cluster consists of a single EC2 instance, or you waste your compute resources.
If you don't need scaling, I'd suggest you run Selenoid on a simple EC2 instance with Docker installed (see the sketch below). If you do want scaling, then I suggest you take a look at the commercial version of Selenoid (called Moon), which you can run on AWS EKS.
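If you go the plain-EC2 route, Aerokube's Configuration Manager can generate the browsers.json that was failing on ECS and start everything for you. A sketch, assuming Docker is already installed on the instance:
# Download the Aerokube Configuration Manager binary
curl -s https://aerokube.com/cm/bash > cm && chmod +x cm
# Pulls browser images, generates browsers.json and starts the Selenoid container
./cm selenoid start --vnc
# Optionally start the UI as well
./cm selenoid-ui start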
In Spring Boot, logs go to stdout by default. That's a nice standard: less config, no directory setup, etc. But I want to build a Docker image and run it on AWS.
How can I get all the logs from the dockerized Spring Boot app's stdout? Does CloudWatch support that? Is there a simple solution, or do I have to switch to logging to a file, mounting Docker volumes, etc.?
It depends on what your architecture looks like and what you want to do with the logs.
Nowadays you can use a myriad of tools to read logs. You can use AWS CloudWatch Logs, and through CloudWatch itself you can configure alerting.
To use it, you can configure your slf4j backend:
<appender name="cloud-watch" class="io.github.dibog.AwsLogAppender">
<awsConfig>
<credentials>
<accessKeyId></accessKeyId>
<secretAccessKey></secretAccessKey>
</credentials>
<region></region>
<clientConfig class="com.amazonaws.ClientConfiguration">
<proxyHost></proxyHost>
<proxyPort></proxyPort>
</clientConfig>
</awsConfig>
<createLogGroup>false</createLogGroup>
<queueLength>100</queueLength>
<groupName>group-name</groupName>
<streamName>stream-name</streamName>
<dateFormat>yyyyMMdd_HHmm</dateFormat>
<layout>
<pattern>[%thread] %-5level %logger{35} - %msg %n</pattern>
</layout>
</appender>
Obviously it depends on your architecture: if you have, for example, Filebeat, you can configure Filebeat to ship to CloudWatch.
If you use an ECS-optimized AMI for the EC2 instances (the ECS agent should be at least 1.9.0), you can also use the awslogs log driver for your containers:
1. Before launching the ECS agent, you must change /etc/ecs/ecs.config and adjust ECS_AVAILABLE_LOGGING_DRIVERS to: ["json-file","awslogs"]
2. Activate the auto-configuration feature to create log groups for ECS tasks (you can also create the groups manually, but I assume you want more automation here); a task-definition sketch follows.
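For reference, each container in the task definition then needs a logConfiguration; a minimal sketch via the CLI, where the family, image, log group and region are placeholders:
# Hypothetical task definition; the log group is created by the
# auto-configuration feature mentioned above (or create it manually).
aws ecs register-task-definition \
  --family spring-boot-app \
  --container-definitions '[{
    "name": "app",
    "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/spring-boot-app:latest",
    "memory": 512,
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/spring-boot-app",
        "awslogs-region": "eu-west-1",
        "awslogs-stream-prefix": "app"
      }
    }
  }]'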
For more information about the awslogs driver, see the AWS documentation:
AWS Logs Driver
Install ECS Agent
I've been working on a CloudFormation template for my environment. I end up with:
VPC
Subnet x2
Autoscaling group
Launch configuration (EC2 instances on AWS Linux AMI)
Application load balancer
CodeDeploy (for deployments)
But I ran into problems with the CodeDeploy configuration in CloudFormation, as not all features are available for EC2 instances. After configuring CodeDeploy manually, I get an error while deploying, such as "too few unhealthy instances", after which the created instances are not destroyed even though rollback is enabled. Right now I'm using only one EC2 instance for the application, but I'm planning to scale in the future.
Is there an alternative to CodeDeploy? I'm interested in triggering deployments from a Jenkins machine.
For the requirements above, I strongly suggest that AWS Elastic Beanstalk is a better way to deploy code to AWS, because you can manage everything in Elastic Beanstalk. For code deployment, Codeship is also a better way to manage deployments integrated with GitHub, instead of AWS CodeDeploy.
Ensure that you have assigned the correct IAM role to the EC2 instance (under "Instance Settings"). This will ensure that your deployment proceeds smoothly without throwing that error.
You can also configure the deployment to EC2 using CodeDeploy through Jenkins.
Steps to follow:
AWS CodeDeploy:
Create a new CodeDeploy application.
Enter a suitable application name and choose "EC2/On-premises" as the compute platform.
Add a deployment group under the application, e.g. "test".
Choose in-place deployment.
Add a service role such as "Codedeploy development". This will allow CodeDeploy to interact with other AWS services.
Choose a suitable deployment configuration, preferably "OneAtATime" if deploying to a single EC2 instance.
Environment configuration:
Choose the EC2 instance to which you want to deploy the application.
Jenkins:
On Jenkins, create a job with a suitable application name.
In the "Post Build Action" section, click on "Add Post Build Action"
Jenkins - post build configuration
Choose : "Deploy an application to AWS CodeDeploy"
Enter the CodeDeploy and S3 details in that section.
The S3 bucket will contain all the builds, which are used to deploy onto EC2 via CodeDeploy. (The plugin's work is roughly the CLI sequence sketched below.)
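For reference, what the Jenkins plugin does is roughly equivalent to this CLI sequence; the application name, bucket and source directory below are placeholders (the deployment group "test" follows the example above):
# Bundle the build output and upload it to S3 as a CodeDeploy revision
aws deploy push \
  --application-name my-app \
  --s3-location s3://my-deploy-bucket/my-app.zip \
  --source build/
# Trigger a deployment of that revision against the deployment group
aws deploy create-deployment \
  --application-name my-app \
  --deployment-group-name test \
  --deployment-config-name CodeDeployDefault.OneAtATime \
  --s3-location bucket=my-deploy-bucket,key=my-app.zip,bundleType=zip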
I've run into a problem while using the AWS SDK. Currently I am using the SDK for Golang, but solutions in other languages are welcome too!
I have an ECS cluster created via the SDK.
Now I need to add EC2 container instances to this cluster. My problem is that I can't use the Amazon ECS agent to specify the cluster name via its config:
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
or something like that. I can use only the SDK.
I found a method called RegisterContainerInstance.
But it has a note:
This action is only used by the Amazon ECS agent, and it is not intended for use outside of the agent.
It doesn't look like a working solution.
I need to understand how (if it's possible) to create a working ECS cluster using the SDK only.
UPDATE:
My main goal is to start a specified number of servers from my Docker image.
While investigating this task I've found that I need to:
create an ECS cluster
assign the needed number of EC2 instances to it
create a task with my Docker image
run it on the cluster manually or as a service
So I:
Created a new cluster via the CreateCluster method with the name "test-cluster".
Created a new task via RegisterTaskDefinition.
Created a new EC2 instance with the ecsInstanceRole role and an ECS-optimized AMI appropriate for my region.
And this is where the problems started.
Actual result: all new EC2 instances were attached to the "default" cluster (AWS created it and attached the instances to it).
If I were using the ECS agent, I could specify the cluster name via the ECS_CLUSTER config env var, but I am developing a tool that uses only the SDK (without any ability to use the ECS agent).
With RegisterTaskDefinition I have no way to specify a cluster, so my question is: how can I assign a new EC2 instance to a specific cluster?
When I tried to just start my task via the RunTask method (hoping that AWS would somehow create instances for me), I received an error:
InvalidParameterException: No Container Instances were found in your cluster.
I actually can't sort out which question you are asking. Do you need to add containers to the cluster, or add instances to the cluster? Those are very different.
Add instances to the cluster
This is not done with the ECS API; it is done with the EC2 API, by creating EC2 instances that have the correct ecsInstanceRole instance profile and user data that sets ECS_CLUSTER to your cluster's name (otherwise the agent registers with the "default" cluster). See the Launching an Amazon ECS Container Instance documentation for more information, and the sketch below.
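A sketch of such a launch from the CLI; the AMI ID is a placeholder for an ECS-optimized AMI in your region, and "test-cluster" follows the question's example. In the Go SDK the same fields live on ec2.RunInstancesInput (where UserData must be base64-encoded):
# user-data.txt contains exactly the agent config from the question:
#   #!/bin/bash
#   echo ECS_CLUSTER=test-cluster >> /etc/ecs/ecs.config
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --count 1 \
  --instance-type t3.medium \
  --iam-instance-profile Name=ecsInstanceRole \
  --user-data file://user-data.txt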
Add containers to the cluster
This is done by defining a task definition, then running those tasks manually or as services. See the Amazon ECS Task Definitions documentation for more information; a CLI sketch follows.
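Once instances have joined the cluster, starting the containers is one more call; a sketch using the cluster name from the question ("test-cluster") with a hypothetical task family "my-task":
# Run the task a fixed number of times, manually
aws ecs run-task --cluster test-cluster --task-definition my-task --count 2
# Or keep a desired count running as a service
aws ecs create-service \
  --cluster test-cluster \
  --service-name my-service \
  --task-definition my-task \
  --desired-count 2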