I am trying to set up a Jenkins slave node in AWS. For cost reasons, the plan is to switch the instance off whenever it is not needed. In AWS, a public IP incurs a small cost if it stays assigned to an instance that is shut down, and we are trying to avoid that. Do we really need a public IP to connect the Jenkins master to the Jenkins slave node? I'd appreciate your suggestions.
You will always need some kind of public IP.
I guess you are using an Elastic IP, your Jenkins 'built-in' node isn't in AWS, and this node is trying to connect to your 'agent' node in AWS.
I'm not really sure, but if your 'built-in' node is accessible from the internet, you could try this:
When you configure a new agent there is an option "Launch agent by connecting it to the controller".
If you configure a node with that option, you should see a command like this in the Jenkins user interface:
java -jar agent.jar -jnlpUrl https://JENKINS_URL/computer/AGENT_NAME/jenkins-agent.jnlp -secret 8d7...695 -workDir "local/path"
You should run that command on your AWS instance and it will connect to your Jenkins 'built-in' node; AFAIK, you won't need an Elastic IP, just a public IP (dynamic, but that doesn't matter).
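Since the instance will be stopped and started regularly, you will probably also want the agent to reconnect on its own at boot. A minimal systemd unit sketch, assuming the agent.jar and the secret from the command above are stored on the instance (paths, user, and URL are placeholders):

# /etc/systemd/system/jenkins-agent.service (example path)
[Unit]
Description=Jenkins inbound agent
After=network-online.target

[Service]
User=jenkins
ExecStart=/usr/bin/java -jar /opt/jenkins/agent.jar -jnlpUrl https://JENKINS_URL/computer/AGENT_NAME/jenkins-agent.jnlp -secret <secret> -workDir /opt/jenkins
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now jenkins-agent.service so it starts every time the instance is powered on.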
I just need some help, as I am new to this system and the previous guy did not leave much documentation.
Currently the Jenkins server is hosted on an AWS instance that only has a private IP address, so the only way to route to the internet is through another of our instances, which is also hosted in AWS with its own private IP address. Because we are new to this system, we accidentally stopped and started all our instances, and now Jenkins is unable to fetch from our GitHub.
Note that the public IP has changed but the private IP has not.
TL;DR
- How do we allow our instance 1 (Jenkins) to SSH to instance 2 (public), which routes out to the internet, so the code can be fetched back to instance 1?
Is there any solution to this? We have currently tried these methods:
- created a new job with the same configuration, in case a file was corrupted
- made sure the plugin versions align with the previously working ones
- tried git config --global, but there is no config file for the jenkins user and nothing under .ssh/config (a possible ~/.ssh/config proxy setup is sketched below)
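For reference, one way to route GitHub SSH traffic from instance 1 through instance 2 is a ProxyJump entry in the jenkins user's ~/.ssh/config on instance 1. This is only a sketch, assuming Git access is over SSH, instance 2 is reachable from instance 1, and OpenSSH 7.3+ is available; the user, IP, and key path are placeholders:

# ~/.ssh/config for the jenkins user on instance 1
Host github.com
    User git
    ProxyJump ubuntu@<instance 2 private IP>
    IdentityFile ~/.ssh/id_rsa

With this in place, a git fetch on instance 1 tunnels through instance 2, which still has a route to the internet.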
I am new to Kubernetes. I am using Kops to deploy my Kubernetes application on AWS. I have already registered my domain on AWS and also created a hosted zone and attached it to my default VPC.
Creating my Kubernetes cluster through kops succeeds. However, when I try to validate my cluster using kops validate cluster, it fails with the following error:
unable to resolve Kubernetes cluster API URL dns: lookup api.ucla.dt-api-k8s.com on 149.142.35.46:53: no such host
I have tried debugging this error but failed. Can you please help me out? I am very frustrated now.
From what you describe, you created a Private Hosted Zone in Route 53. The validation is probably failing because Kops is trying to access the cluster API from your machine, which is outside the VPC, but private hosted zones only respond to requests coming from within the VPC. Specifically, the hostname api.ucla.dt-api-k8s.com is where the Kubernetes API lives, and is the means by which you can communicate and issue commands to the cluster from your computer. Private Hosted Zones wouldn't allow you to access this API from the outside world (your computer).
A way to resolve this is to make your hosted zone public. Kops will automatically create a VPC for you (unless configured otherwise), but you can still access the API from your computer.
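A rough sketch of that approach with the AWS CLI and kops (the domain, zone, and state-store values are placeholders, and the flags shown are only illustrative):

# Create a public hosted zone for the cluster's parent domain
aws route53 create-hosted-zone --name dt-api-k8s.com --caller-reference my-k8s-zone-1
# Recreate the cluster with public DNS so the API record is resolvable from your machine
kops create cluster --name ucla.dt-api-k8s.com --zones us-west-2a --dns public --state s3://<your-kops-state-bucket>
kops update cluster ucla.dt-api-k8s.com --yes --state s3://<your-kops-state-bucket>

Remember to point the domain's registrar (or parent zone) at the new hosted zone's name servers, otherwise the public records still won't resolve.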
I encountered this last night using a kops-based cluster creation script that had worked previously. I thought maybe switching regions would help, but it didn't. This morning it is working again. This feels like an intermittency on the AWS side.
So the answer I'm suggesting is:
When this happens, you may need to give it a few hours to resolve itself. In my case, I rebuilt the cluster from scratch after waiting overnight. I don't know whether or not it was necessary to start from scratch -- I hope not.
This is all I had to run:
kops export kubecfg (cluster name) --admin
This imports the "new" kubeconfig needed to access the kops cluster.
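To confirm the exported kubeconfig is the one in use, a quick check such as the following should now reach the cluster:

kubectl config current-context
kubectl get nodes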
I came across this problem with an Ubuntu box. What I did was add the DNS record from the Route 53 hosted zone to /etc/hosts.
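For example, an entry like the following (the IP address is a placeholder for whatever the Route 53 record resolves to; the hostname is the API record from the error above):

# /etc/hosts
203.0.113.10    api.ucla.dt-api-k8s.com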
Here is how I resolved the issue:
It looks like there may be a bug in kops: even after waiting 10-15 minutes, kops validate cluster still shows
Validation failed: unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api
but behind the scenes the Kubernetes cluster is actually up. You can verify this by SSHing into the master node of your Kubernetes cluster as follows:
Go to the EC2 console page where your cluster's instances are listed.
Copy the "Public IPv4 address" of your master node.
From a command prompt, log in to the master node:
ssh ubuntu@<"Public IPv4 address" of your master node>
Verify that you can see all nodes of the cluster with the command below; it should list your master node and worker nodes:
kubectl get nodes
I am new to ZooKeeper and AWS EC2. I am trying to install ZooKeeper on 3 EC2 instances.
As per the ZooKeeper documentation, I have installed ZooKeeper on all 3 instances, created zoo.cfg and added the configuration below:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=localhost:2888:3888
server.2=<public ip of ec2 instance 2>:2889:3889
server.3=<public ip of ec2 instance 3>:2890:3890
I have also created the myid file on all 3 instances as /opt/zookeeper/data/myid, as per the guidelines.
I have a couple of queries:
Whenever I start the ZooKeeper server on an instance, it starts in standalone mode (according to the logs).
Will the above configuration really make the servers connect to each other? What are the ports 2889:3889 and 2890:3890 about? Do I need to configure them on the EC2 machines, or should I use different ports?
Do I need to create a security group to open these connections? I am not sure how to do that for an EC2 instance.
How do I confirm that all 3 ZooKeeper servers have started and can communicate with each other?
The ZooKeeper configuration is designed such that you can install the exact same configuration file on all servers in the cluster without modification. This makes ops a bit simpler. The component that specifies the configuration for the local node is the myid file.
The configuration you've defined is not one that can be shared across all servers. All of the servers in your server list should be binding to a private IP address that is accessible to other nodes in the network. You're seeing your server start in standalone mode because you're binding to localhost. So, the problem is the other servers in the cluster can't see localhost.
Your configuration should look more like:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=<private ip of ec2 instance 1>:2888:3888
server.2=<private ip of ec2 instance 2>:2888:3888
server.3=<private ip of ec2 instance 3>:2888:3888
The two ports listed in each server definition are respectively the quorum and election ports used by ZooKeeper nodes to communicate with one another internally. There's usually no need to modify these ports, and you should try to keep them the same across servers for consistency.
Additionally, as I said you should be able to share that exact same configuration file across all instances. The only thing that should have to change is the myid file.
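For example, assuming the dataDir above, on instance 1 you would write its server ID (and 2 and 3 on the other instances, matching the server.N lines):

echo "1" > /opt/zookeeper/data/myid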
You probably will need to create a security group and open up the client port to be available for clients and the quorum/election ports to be accessible by other ZooKeeper servers.
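A rough sketch with the AWS CLI (the security group ID and CIDR are placeholders; in practice you would scope the quorum/election ports to the ZooKeeper servers' own group or subnet):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2181 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2888 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3888 --cidr 10.0.0.0/16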
Finally, you might want to look in to a UI to help manage the cluster. Netflix makes a decent UI that will give you a view of your cluster and also help with cleaning up old logs and storing snapshots to S3 (ZooKeeper takes snapshots but does not delete old transaction logs, so your disk will eventually fill up if they're not properly removed). But once it's configured correctly, you should be able to see the ZooKeeper servers connecting to each other in the logs as well.
EDIT
@czerasz notes that starting from version 3.4.0 you can use the autopurge.snapRetainCount and autopurge.purgeInterval directives to keep your snapshots clean.
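For reference, those directives go in the same configuration file; the retention values below are just an example:

autopurge.snapRetainCount=3
autopurge.purgeInterval=24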
@chomp notes that some users have had to use 0.0.0.0 for the local server IP to get the ZooKeeper configuration to work on EC2. In other words, replace <private ip of ec2 instance 1> with 0.0.0.0 in the configuration file on instance 1. This is counter to the way ZooKeeper configuration files are designed but may be necessary on EC2.
Adding additional info regarding Zookeeper clustering inside Amazon's VPC.
The solution with the VPC's public IP address should be the preferred one, since using '0.0.0.0' should be your last option.
If you are using Docker on your EC2 instance, '0.0.0.0' will not work properly with ZooKeeper 3.5.x after a node restart.
The issue lies in how '0.0.0.0' is resolved, how the ensemble shares node addresses, and the SID order (if you start your nodes in descending order, this issue may not occur).
So far the only working solution is to upgrade to version 3.6.2+.
I am using Chef to create Amazon EC2 instances inside a VPC. I have allocated an Elastic IP to the new instance using the --associate-eip option of knife ec2 server create. How do I bootstrap it without a gateway machine? It gets stuck at "Waiting for sshd" because it uses the private IP of the newly created server to SSH into it, even though it has an Elastic IP allocated.
Am I missing anything? Here is the command I used.
bundle exec knife ec2 server create --subnet <subnet> --security-group-ids
<security_group> --associate-eip <EIP> --no-host-key-verify --ssh-key <keypair>
--ssh-user ubuntu --run-list "<role_list>"
--image ami-59590830 --flavor m1.large --availability-zone us-east-1b
--environment staging --ebs-size 10 --ebs-no-delete-on-term --template-file
<bootstrap_file> --verbose
Is there any other work-around/patch to solve this issue?
Thanks in advance
I finally got around the issue by using the --server-connect-attribute option, which is supposed to be used along with a --ssh-gateway attribute.
Add --server-connect-attribute public_ip_address to above knife ec2 create server command, which will make knife use public_ip_address of your server.
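For example, the command from the question would become something like this (same placeholders as above):

bundle exec knife ec2 server create --subnet <subnet> --security-group-ids
<security_group> --associate-eip <EIP> --server-connect-attribute public_ip_address
--no-host-key-verify --ssh-key <keypair> --ssh-user ubuntu --run-list "<role_list>"
--image ami-59590830 --flavor m1.large --availability-zone us-east-1b
--environment staging --ebs-size 10 --ebs-no-delete-on-term --template-file
<bootstrap_file> --verbose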
Note: This hack works with knife-ec2 (0.6.4). Refer to def ssh_connect_host here.
Chef will always use the private IP while registering the EC2 nodes. You can get this working by having your chef server inside the VPC as well. Definitely not a best practice.
The other workaround is to let your Chef server be outside the VPC. Instead of bootstrapping the instance using the knife ec2 command, follow the instructions over here.
This way you will bootstrap your node from the node itself and not from the Chef server/workstation.
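As a rough sketch of what a node-side bootstrap typically looks like (not the exact linked instructions; the server URL, organization, validator name, and role are placeholders, and the validation key must be copied from your Chef server):

# On the new node itself
curl -L https://omnitruck.chef.io/install.sh | sudo bash
sudo mkdir -p /etc/chef
# /etc/chef/client.rb should contain at least:
#   chef_server_url        "https://<your-chef-server>/organizations/<org>"
#   validation_client_name "<org>-validator"
# and /etc/chef/validation.pem must hold the validator key.
echo '{"run_list":["role[<role>]"]}' | sudo tee /etc/chef/first-boot.json
sudo chef-client -j /etc/chef/first-boot.json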
I am trying to add a second node to my Couchbase 2.1.1 cluster on EC2. However when I attempt to add a new server under
Server Nodes > Active Server > Add Server
I get the following error
Attention - Failed to reach erlang port mapper.
Could not connect to "172.31.49.78" on port "4369".
This could be due to an incorrect host/port combination or a
firewall in place between the servers
Another odd thing I noticed is that the second Couchbase instance has a blank public DNS. I created it with the "More like this" wizard in the AWS management console. What should I try next? Help is appreciated!
When I want to add a new node to the cluster, I open the web admin console on the new node and click the join-cluster option, entering the IP of the current node and the relevant username and password.
You are most likely having this issue because you haven't opened up port 4369, as stated in the error, on both nodes; it is needed for node-to-node communication. Change your security group on AWS to allow this for both nodes.
Visit this link to see which ports you need for node-to-node and client-to-node communication: http://docs.couchbase.com/couchbase-manual-2.2/#network-ports
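As a sketch, opening the Erlang port mapper port from the error with the AWS CLI would look something like this (the security group ID and CIDR are placeholders; add the other ports from the linked page the same way):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 4369 --cidr 172.31.0.0/16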
I recommend starting with a fresh instance and installing CB onto it, then going through the process yourself. I don't believe in "pre-cooked" solutions when it is just as easy to set it up on your own.
I have three Couchbase clusters running on AWS, and have had no issues. That being said, I also have my machines configured in a VPC, and resolve to one another using the Hosts file on the machine, but your situation may be different. You'll need to make sure your AWS security groups are configured correctly, whatever network topology you decide upon.