I just need some help, as I am new to this system and the previous admin did not leave much documentation.
Currently the Jenkins server is hosted on AWS in an instance that only has a private IP address, so the only way it can reach the internet is through another instance of ours, also hosted on AWS and reachable over its private IP. Being new to this system, we accidentally stopped and started all our instances, and now Jenkins is unable to fetch from our GitHub.
Note: after the stop/start the public IPs have changed, but the private IPs have not.
TLDR
- How do we allow instance 1 (Jenkins) to SSH/route through instance 2 (public) so it can reach the internet and fetch code back to instance 1? (A sketch follows after the list below.)
Any solution to this? So far we have tried the following:
- created a new job with the same configuration, in case a file was corrupted
- made sure the plugin versions align with the previously working ones
- tried git config --global, but there is no config file for the Jenkins user, and nothing under .ssh/config
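For reference, one way to wire this up without touching the VPC routing is to let the jenkins user tunnel its GitHub SSH traffic through instance 2. A minimal sketch, assuming OpenSSH 7.3+ on instance 1, that 10.0.1.20 (a placeholder) is instance 2's private IP, and that the jenkins user's key is accepted by both instance 2 and GitHub:

```
# Run as the jenkins user on instance 1 (e.g. sudo su - jenkins)
# 10.0.1.20 is a placeholder for instance 2's private IP
cat >> ~/.ssh/config <<'EOF'
Host github.com
    User git
    IdentityFile ~/.ssh/id_rsa
    # Tunnel through instance 2, which has internet egress
    ProxyJump ec2-user@10.0.1.20
EOF
chmod 600 ~/.ssh/config

# Verify the path works before re-running the Jenkins job
ssh -T git@github.com
```

If instance 2 is instead meant to act as a NAT instance at the network layer, also confirm that the private subnet's route table still sends 0.0.0.0/0 to it and that its source/destination check is disabled.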
Related
I'm running GitLab CE privately within an AWS VPC that I access via a VPN instance. I installed the latest AWS AMI of GitLab CE, then upgraded it to the latest version of GitLab. I've gotten everything working, except for one thing: Whenever I reboot the instance in EC2, my /etc/gitlab/gitlab.rb's external_url is reset to the IP address of my VPC's SNAT instance, almost as if GitLab is asking "what is my public IP?" and then changing the setting's value to that answer. I keep changing it back to the internal hostname provided by my VPC's Route 53 hosted zone, https://gitlab.corp.mydomain.com, but it's reset every time I reboot the instance. To be clear, this GitLab instance is not exposed to the internet, but it does have egress to the internet through the SNAT (e.g., to update OS packages).
How can I force my internal hostname to stick? I can still access GitLab through my browser at https://gitlab.corp.mydomain.com, so perhaps this doesn't matter?
After a quick search I found this:
https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests/2021
Summary of the content:
It seems GitLab's hostname detection does not work correctly when public IPs are disabled in EC2. To keep working in such cases, GitLab replaces the hostname with the assigned IP. GitLab reverts to the hostname once it can resolve it again, at least from version 10.1.3 onwards.
Since it works for you, I would simply keep the configuration as it is.
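If you ever do want to pin it explicitly, the standard Omnibus way is to hard-code external_url and reconfigure; a minimal sketch using the hostname from your question:

```
# Set the internal hostname explicitly in /etc/gitlab/gitlab.rb
sudo sed -i "s|^external_url .*|external_url 'https://gitlab.corp.mydomain.com'|" /etc/gitlab/gitlab.rb

# Apply the change
sudo gitlab-ctl reconfigure
```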
I have an AWS instance with WHM/cPanel installed on it.
I upgraded the instance from micro to small, backed up the EBS volume and reattached it, reattached the public IP, and set up the previous private IP on the new instance.
I am able to log in to cPanel and via SSH, but when I open the websites hosted on this instance I get the cPanel default page instead of the actual homepage.
My Checklist -
Old Volume attached
Old Public IP Attached
Old Private IP attached (as Secondary IP)
What else am I missing?
Can you please help?
Thanks
It turned out to be easy.
I went to WHM -> IP Functions -> IP Migration Wizard, entered the new local IP, and it fixed everything.
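For anyone landing here later, a rough way to confirm the mismatch from the shell before running the wizard; the paths below are the usual cPanel/EA4 locations and may differ on your build, and OLD_PRIVATE_IP is a placeholder:

```
# The private IP the OS actually has
ip -4 addr show

# The IP cPanel believes is the main/shared IP (typical cPanel location)
cat /var/cpanel/mainip

# Count virtual hosts still bound to the old private IP (EA4 path; adjust if needed)
grep -c "OLD_PRIVATE_IP" /etc/apache2/conf/httpd.conf
```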
Thanks for looking into this.
So, I've been working locally in a vagrant ubuntu box for the past month: I've spent a lot of time working on customizing it and installing exactly all the software I want on it. I started all of this through the normal vagrant tutorial (aka, nothing special). I packaged my local vagrant box into a package.box file. Now, I want to move my development environment (e.g. package.box file) to an Amazon EC2 instance on AWS. I know I'm not supposed to ask for software recommendations, but my question is basically: is this possible to do and, if it is, could you point me to some examples of people doing it? I've read that packer might be an option, but it looks to me (a very inexperienced perspective) that maybe I should have started with that instead of trying to use it now. Any help would be appreciated - I don't want to spend a couple weeks setting up a new environment when I have one locally set up.
Edit:
Progress! I followed @error2007s' link and worked through the tutorial. I'm at the point where I've uploaded the VMDK image to S3 and provisioned an instance from it (all done automatically with the ec2-import-instance command on the CLI). However, I don't see a public IP to access the new instance after I start it up.
I think this is related to cloud-init somehow, but I'm not sure what that really is. I tried both the /etc/cloud/cloud.cfg file that came with the box and the one listed here, and neither of the two boxes I uploaded got a public IP I could access.
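One thing worth double-checking from the CLI is whether the subnet hands out public IPs at launch and whether this instance actually got one; a hedged sketch with placeholder IDs:

```
# Does the subnet auto-assign public IPs? (subnet ID is a placeholder)
aws ec2 describe-subnets --subnet-ids subnet-0123456789abcdef0 \
    --query 'Subnets[].MapPublicIpOnLaunch'

# If it returns false, enable it (affects instances launched afterwards)
aws ec2 modify-subnet-attribute --subnet-id subnet-0123456789abcdef0 \
    --map-public-ip-on-launch

# Check whether this instance received a public IP at all
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[].Instances[].PublicIpAddress'
```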
Edit 2:
Here are some things I see in the Console (They all seem right to me, but a more experienced eye might see something wrong):
subnet info:
Auto-assign Public IP: yes
Network ACL:
VPC info:
DNS resolution: yes
DNS hostnames: yes
ClassicLink DNS Support: no
VPC CIDR: 172.31.0.0/16
DHCP Option Set:
Options: domain-name = ec2.internal; domain-name-servers = AmazonProvidedDNS
From my perspective, those all look right, or am I missing something?
I assigned an Elastic IP per these instructions, but when I run ssh ec2-user@<elastic-ip>, it says ssh: connect to host <elastic-ip> port 22: Connection refused. The security group assigned to the instance allows all protocols on all ports. Also, this is the first time I have encountered an Elastic IP and I'm unsure what exactly it does.
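As a side note, "Connection refused" usually means the packet reached the host but nothing is listening on port 22 (a security-group block would normally just time out), so the Elastic IP itself is probably fine. A quick, hedged way to see whether sshd ever started in the imported image, with a placeholder instance ID:

```
# Look for sshd/cloud-init messages in the boot log
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text | less

# Confirm the Elastic IP is actually associated with this instance
aws ec2 describe-addresses \
    --query 'Addresses[].{IP:PublicIp,Instance:InstanceId}'
```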
Amazon enables you to import your VM into AWS as an EC2 instance. Check this tutorial; it is simpler:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingVirtualMachinesinAmazonEC2.html
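With the current AWS CLI the same flow looks roughly like the sketch below; the bucket, key, and description are placeholders, the older ec2-import-instance tool has been superseded by VM Import/Export, and the vmimport service role from the linked guide must already be set up:

```
# Upload the exported disk image to S3 (bucket/key are placeholders)
aws s3 cp package-disk001.vmdk s3://my-import-bucket/package-disk001.vmdk

# Start the import; this produces an AMI you can launch normally
aws ec2 import-image \
    --description "Vagrant dev box" \
    --disk-containers "Format=vmdk,UserBucket={S3Bucket=my-import-bucket,S3Key=package-disk001.vmdk}"

# Poll until the task reports "completed", then launch from the resulting AMI
aws ec2 describe-import-image-tasks
```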
You want to use the Vagrant AWS provider found here:
https://github.com/mitchellh/vagrant-aws
This is a Vagrant 1.2+ plugin that adds an AWS provider to Vagrant,
allowing Vagrant to control and provision machines in EC2 and VPC.
This lets you provision your AWS instances using Vagrant, so you can migrate the same local development environment to an AWS EC2 instance.
There is a good tutorial here:
https://nurmrony.wordpress.com/2015/03/15/vagrant-deploy-and-provisioning-an-amazon-ec2-instance/
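The day-to-day workflow from the plugin's README boils down to a few commands (credentials, AMI, and so on are set in the Vagrantfile, which the README and the tutorial above cover):

```
# Install the AWS provider plugin
vagrant plugin install vagrant-aws

# Add the "dummy" box the plugin expects; the real settings live in the Vagrantfile
vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box

# Bring the machine up in EC2 instead of a local provider
vagrant up --provider=aws
```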
Hi, I have found these articles but have not yet tested them myself. I'm still in the middle of organizing my personal notes and identifying my technology stack. I intend to have a Homestead Vagrant box replicated as an EC2 instance so I won't have to configure the instance(s) manually.
https://nurmrony.wordpress.com/2015/03/15/vagrant-deploy-and-provisioning-an-amazon-ec2-instance/
https://www.tothenew.com/blog/using-vagrant-to-deploy-aws-ec2-instances/
https://foxutech.com/how-to-deploy-on-amazon-ec2-with-vagrant/
https://blog.scottlowe.org/2016/09/15/using-vagrant-with-aws/
https://devops.com/devops-primer-using-vagrant-with-aws/
I find their approaches similar. The only thing I am worried about is the "vagrant box add" part.
I asked myself: what if I had to do this setup again for familiarization purposes? What would happen, given that I already added a Vagrant box (the dummy one, as instructed in the tutorials) previously?
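For what it's worth, re-adding a box under a name that already exists simply errors out unless you pass --force, so repeating the setup should be harmless; a quick check, assuming you named it dummy as in the tutorials:

```
# See which boxes are already installed
vagrant box list

# Remove the dummy box first if you want a clean slate
vagrant box remove dummy
```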
I started a cluster in AWS following the guides and then worked through the guestbook example. The problem I have is accessing it externally. I set the PublicIP to the EC2 public IP and then use that IP to access it in the browser on port 8000, as specified in the guide.
Nothing showed. To make sure it was actually the service that wasn't serving anything, I removed the service and set a host port of 8000 instead. When I went to the EC2 instance IP I could access it correctly, so it seems there is a problem with my setup. The one thing I can think of is that I am inside a VPC with an internet gateway. I haven't included the JSON files I used because they are almost exactly the same as the guestbook example, with a few changes to use my EC2 PublicIP and a few changes for the VPC.
On AWS you have to use your PRIVATE IP address with Kubernetes services, since your instance is not aware of its public IP. The NAT on Amazon's side is done in such a way that your service will be accessible with this configuration.
Update: please note that the possibility to set the public IP of a service explicitly was removed in the v1 API, so this issue is not relevant anymore.
Please check the following documentation page for workarounds: https://kubernetes.io/docs/user-guide/services/
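On current Kubernetes the usual workaround is to expose the service as a NodePort (or a cloud LoadBalancer) rather than setting a public IP on it; a hedged sketch, assuming a frontend Deployment along the lines of the guestbook example:

```
# Expose the guestbook frontend on a NodePort
kubectl expose deployment frontend --port=80 --type=NodePort

# Find the allocated node port, then open that port in the EC2 security group
kubectl get service frontend -o jsonpath='{.spec.ports[0].nodePort}'
```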
I'm trying to build a t2.micro Ubuntu 12.04 EC2 environment running Airtime from Sourcefabric, but despite the installation going through OK I cannot access the login page at the address the installer provided. I have changed my security settings several times, and I feel they might have something to do with it. I have run system checks to see if Airtime is working, and they report a perfectly operating copy every time. The address I'm trying to access the installation on is http://ip-172-31-5-46.us-west-2.compute.internal. Does anyone know why Amazon AWS is reacting this way?
The URL you provided is EC2's internal DNS address (note the ".internal" at the end). If you want it to be publicly accessible, you'll need to assign an Elastic IP to the EC2 instance, or enable auto-assignment of a public IP (and its public DNS name) when the instance is created.
Amazon docs for reference
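For reference, associating an Elastic IP from the CLI looks roughly like this (the instance ID and allocation ID are placeholders; the same can be done from the EC2 console):

```
# Allocate a new Elastic IP in the VPC
aws ec2 allocate-address --domain vpc

# Associate it with the instance using the AllocationId returned above
aws ec2 associate-address \
    --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0
```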