I have an EC2 machine on AWS where I've installed Jenkins. Given that only my co-workers need to interface with this machine, can I simply use a self-signed certificate? The machine is meant to be used only by internal teams, so I don't think this poses a problem; I just want to be sure.
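For context, the kind of certificate I have in mind would be generated with something like this (the hostname is a placeholder):

    # Self-signed certificate and key, valid for one year (hostname is a placeholder)
    openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
      -keyout jenkins.key -out jenkins.crt \
      -subj "/CN=jenkins.internal.example.com"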
This is my first time setting up a dynamic website, so bear with me. My goal is to get SSL/HTTPS working on my single-instance PHP AWS Elastic Beanstalk web app.
I already know that SSL is easy to set up with a load balancer, and that ACM certificates only work with a load balancer.
I want a single instance since it's cheaper. My project is small; I don't expect a lot of traffic, at most one user per day.
... back to the problem: I did some research and came across this link, which is a how-to from Amazon:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-php.html
The problem I'm running into is the part where I'm supposed to put my "certificate contents here".
From my research, what goes here is an SSL certificate from a third party. When I purchased my domain from Namecheap, I also purchased PositiveSSL. Where I'm confused is how to create these "certificate contents". I found this link on Namecheap:
https://www.namecheap.com/support/knowledgebase/article.aspx/9446/14/generating-csr-on-apache-opensslmodsslnginx-heroku/
I know that I have to generate a CSR through SSH, using commands that ask for information about my site, which is needed to make the request and get the certificate. It says I have to do this where I'm hosting my website. My question is: how do I do this in Elastic Beanstalk? Or is there another way to do this, or am I misunderstanding something? I'm a bit lost here.
I've spent two days researching but can't find how to do this. I've found some people linking GitHub repositories in other similar questions, but they don't seem to help me understand how to do it.
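For reference, my understanding is that the CSR generation the Namecheap article describes boils down to an OpenSSL command along these lines (all the subject fields are placeholders):

    # Generate a private key and a CSR to submit to the CA (subject fields are placeholders)
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout example_com.key -out example_com.csr \
      -subj "/C=US/ST=State/L=City/O=ExampleOrg/CN=example.com"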
I was more or less in your shoes, but with a Java app instead of PHP. The way I see it, you have three broad tasks to solve.
Generate a proper certificate. You can either go for the one you already have from PositiveSSL or generate a free one for test purposes with Let's Encrypt and certbot (this might give you more control over, and understanding of, what (sub)domain you're using it for); see the certbot sketch after this list. The end result is a certificate and key for the desired domain.
Make sure the certificate and key are on the Elastic Beanstalk instance in question and are picked up by your web server. For this you need to package your app properly before deploying it, paying attention to the paths and the AWS docs for the single instance which you mentioned. Paste your certificate data into .ebextensions/https-instance.config (a sketch of this file follows the list), and it will be deployed as files under the specified paths. Once you're done with the whole process, consider sourcing private certs and keys from S3; never commit private data to version control.
Make sure the HTTPS traffic flows through. For this you'll need to make sure that your Elastic Beanstalk VPC security group has an inbound rule for port 443 (also covered in the AWS docs; see the CLI example below).
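For the Let's Encrypt route, a minimal certbot invocation looks roughly like this (the domain and webroot path are placeholders, and the exact flags depend on your setup):

    # Obtain a certificate via the webroot challenge (domain and path are placeholders)
    sudo certbot certonly --webroot -w /var/www/html -d example.com
    # The issued cert and key typically land under /etc/letsencrypt/live/example.com/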
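The "certificate contents here" placeholder in the AWS doc is simply the PEM body of your issued certificate (and key), so the .ebextensions file ends up looking roughly like this (paths follow the linked AWS example; the PEM bodies are of course your own):

    files:
      /etc/pki/tls/certs/server.crt:
        mode: "000400"
        owner: root
        group: root
        content: |
          -----BEGIN CERTIFICATE-----
          certificate contents here
          -----END CERTIFICATE-----
      /etc/pki/tls/certs/server.key:
        mode: "000400"
        owner: root
        group: root
        content: |
          -----BEGIN RSA PRIVATE KEY-----
          private key contents here
          -----END RSA PRIVATE KEY-----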
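And the port 443 rule can be added from the console or with a one-liner like this (the security group ID is a placeholder):

    # Open HTTPS to the world on the environment's security group (group ID is a placeholder)
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 443 --cidr 0.0.0.0/0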
I've been tasked with getting a new SSL certificate installed on a website; the site is hosted on AWS EC2.
I've discovered that I need the key pair in order to connect to the server instance, but the client no longer has contact with the former webmaster.
I don't have much familiarity with AWS, so I'm somewhat at a loss as to how to proceed. I'm guessing I would need the old key pair to access the server instance and install the SSL certificate?
I see there's also the Certificate Manager section in AWS, but I don't currently see an SSL certificate in there. Will installing it there attach it to the website, or do I need to access the server instance and install it there?
There is a documented process for updating the SSH keys on an EC2 instance. However, it requires some downtime and must not be run on an instance-store-backed instance. If you're new to AWS you might not be able to determine whether that's the case, so it would be risky.
Instead, I think your best option is to bring up an Elastic Load Balancer to be the new front end for the application: clients will connect to it, and it will in turn connect to the application instance. You can attach an ACM certificate to the ELB, and shifting traffic should be a matter of changing the DNS entry (but, of course, test it out first!).
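As a rough sketch of the AWS CLI steps, assuming an Application Load Balancer (the domain and all ARNs are placeholders):

    # Request a certificate in ACM for the site's domain (DNS validation)
    aws acm request-certificate --domain-name www.example.com --validation-method DNS

    # Once issued, attach it to an HTTPS listener on the load balancer
    aws elbv2 create-listener \
      --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/0123456789abcdef \
      --protocol HTTPS --port 443 \
      --certificates CertificateArn=arn:aws:acm:us-east-1:111122223333:certificate/11111111-2222-3333-4444-555555555555 \
      --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-targets/0123456789abcdef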
Moving forward, you should redeploy the application to a new EC2 instance and then point the ELB at that instance. This may be easier said than done, because the old instance was probably configured manually. With luck, you have the site in source control and can do deploys in a test environment.
If not, and you're running on Linux, you'll need to snapshot the live instance's volume and attach it to a different instance to learn how it's configured (a CLI sketch follows). Start with the EC2 EBS docs and try it out in a test environment before touching production.
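The snapshot-and-inspect workflow looks roughly like this with the AWS CLI (all IDs and the AZ are placeholders):

    # Snapshot the live instance's root volume
    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "copy of prod root volume"

    # Create a volume from the snapshot in the same AZ as your inspection instance
    aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a

    # Attach it to the inspection instance as a secondary device, then mount and explore it
    aws ec2 attach-volume --volume-id vol-0aaaabbbbcccc1111 \
      --instance-id i-0123456789abcdef0 --device /dev/sdf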
I'm not sure if there's any good way to recover the content from a Windows EC2 instance. And if you're not comfortable with doing ops, you should find someone who is.
We are looking to migrate to AWS at the start of the new year.
One of the issues we have is that some of the applications we will be migrating have been configured with hardcoded IP addresses (DB hostnames).
We will be using ELBs to fully utilise the elasticity and dynamic nature of AWS for our infrastructure. With this in mind, the IP addresses that were static before will now be dynamic (i.e. frequently assigned new IPs).
What is the best approach to solving these hardcoded values?
In particular, IP addresses? I appreciate that usernames, passwords, etc. can be placed into a single config file and read with an ini function or similar.
I think one solution could be:
1) Make an AWS API call to query the IP address of the host, then use the returned value.
Appreciate any help with this!
You should avoid hardcoded IP addresses and instead use the hostname of the referenced resource. With either RDS or a self-hosted DB running on EC2, you can use DNS to resolve the IP by hostname at run time (see the example below).
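For example, the application config carries the endpoint hostname rather than an IP (the hostname here is a placeholder):

    # app configuration: reference the DB by hostname, never by IP
    DB_HOST=mydb.abc123xyz.us-east-1.rds.amazonaws.com

    # DNS resolves the current IP at run time; verify from the instance with:
    getent hosts mydb.abc123xyz.us-east-1.rds.amazonaws.com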
Assuming you are using CodeDeploy to provision the software, you can use CodeDeploy lifecycle event hooks to configure the application after the software has been installed. An AfterInstall hook can be configured to retrieve your application parameters and make them available to the application before it starts.
Regarding the storage of application configuration data, consider using AWS Parameter Store. Using it as a secure and durable source of application configuration data, you can retrieve the DB host address and other application parameters at provision time, using the CodeDeploy features mentioned above; a sketch follows.
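A minimal sketch, assuming a hypothetical parameter named /myapp/db_host and an AfterInstall hook; the script and env-file paths are placeholders:

    # appspec.yml (excerpt): run a script after the application files are installed
    hooks:
      AfterInstall:
        - location: scripts/fetch_config.sh
          timeout: 60

    #!/bin/bash
    # scripts/fetch_config.sh: pull the DB host from Parameter Store into an env file
    # (/myapp/db_host and /opt/myapp/app.env are hypothetical names for illustration)
    DB_HOST=$(aws ssm get-parameter --name /myapp/db_host \
      --with-decryption --query Parameter.Value --output text)
    echo "DB_HOST=${DB_HOST}" >> /opt/myapp/app.env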
We have a number of third-party systems which are not part of our AWS account and not under our control. Each of these systems has an internal IIS server, set up with DNS that is only resolvable from the local machine. This IIS server hosts an API which we want to be able to use from our EC2 instances.
My idea is to set up some type of VPN connection between the EC2 instance and the third-party system so that the EC2 instance can use the same internal DNS to call the API.
AWS provides Direct Connect; is that the correct path to go down in order to do this? If it is, can anyone provide any help on how to move forward? If it's not, what is the correct route?
Basically, we have a third-party system, and on it is an IIS server running software which exposes an API. From the local machine I can run http://<domain>/api/get and it returns a JSON payload. However, in order to get onto the third-party system, we connect via a VPN from an individual laptop. We need our EC2 instance in AWS to be able to access this API, so it needs to connect to the third party via the same VPN connection. So I think I need a separate VPC within AWS.
The best answer depends on your budget, bandwidth and security requirements.
Direct Connect is excellent. This service provides a dedicated physical network connection from your point of presence to Amazon. Once Direct Connect is configured and running, you then configure a VPN (IPsec) over this connection. Negatives: long lead times to install the fibre, and it is relatively expensive. Positives: high security and predictable network performance.
For your situation, you will probably want to consider setting up a VPN over the public Internet. Depending on your requirements, I would recommend installing Windows Server on both ends, linked via a VPN. This will give you an easy-to-maintain system, provided you have Windows networking skills available.
Another good option is Openswan installed on two Linux systems. Openswan provides the VPN and the routing between networks (see the configuration sketch below).
Setup for either Windows or Linux (Openswan) is easy; you could configure everything in a day or two.
Both Windows and Openswan support a hub architecture: one system in your VPC and one system in each of your data centers.
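For the Openswan option, a single site-to-site tunnel is defined in /etc/ipsec.conf roughly like this (all addresses and CIDRs are placeholders):

    # /etc/ipsec.conf (excerpt): one tunnel between the VPC and a data center
    conn vpc-to-datacenter
        type=tunnel
        authby=secret
        # left = the EC2 instance; leftid is its Elastic IP
        left=%defaultroute
        leftid=203.0.113.10
        # leftsubnet = the VPC CIDR
        leftsubnet=10.0.0.0/16
        # right = data-center gateway public IP; rightsubnet = its LAN
        right=198.51.100.20
        rightsubnet=192.168.0.0/24
        auto=start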
Depending on the routers installed in each data center, you may be able to use AWS Virtual Private Gateways. The routers in each data center are set up with connection information, and you then connect the Virtual Private Gateways to them. This is a very good setup if you have the right hardware in your data centers (i.e. a router that Amazon supports, of which there are quite a few).
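The AWS side of a Virtual Private Gateway setup looks roughly like this with the CLI (all IDs, the peer IP, and the ASN are placeholders):

    # Create the virtual private gateway and attach it to the VPC
    aws ec2 create-vpn-gateway --type ipsec.1
    aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0

    # Describe the data-center router, then create the VPN connection between the two
    aws ec2 create-customer-gateway --type ipsec.1 --public-ip 198.51.100.20 --bgp-asn 65000
    aws ec2 create-vpn-connection --type ipsec.1 \
      --vpn-gateway-id vgw-0123456789abcdef0 \
      --customer-gateway-id cgw-0123456789abcdef0 \
      --options '{"StaticRoutesOnly":true}'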
Note: you probably cannot use a VPN client, as a client does not route two networks together, only a single system to a network.
You will probably need to set up a DNS forwarder in your VPC to pass queries back to your private DNS servers.
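One lightweight way to do that is a dnsmasq instance in the VPC that forwards just the private zone (the zone name and server IP are placeholders):

    # /etc/dnsmasq.conf (excerpt): forward the private zone to the on-prem DNS server
    server=/corp.example.internal/192.168.0.53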
Maybe sshuttle can do what you need. Technically, it opens an SSH tunnel between your EC2 instance and a remote SSH host, and it can also resolve DNS requests on the remote side. It's not a perfect solution, since a typical VPN has failover, but you can use it as a starting point, and later perhaps as a fallback or for testing purposes.
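A minimal sshuttle invocation for this, assuming the remote LAN CIDR is 192.168.0.0/24 (the host and CIDR are placeholders):

    # Route traffic for the remote network through an SSH tunnel, forwarding DNS lookups too
    sshuttle --dns -r user@remote-ssh-host 192.168.0.0/24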
Usually, for security reasons, production SSL certificates and other secrets are controlled by a very limited group of people in a company, while staging certificates can be self-signed and used by all developers and DevOps. As far as I can see in the Boxfuse documentation, the keystore is supposed to be included in the application build artifacts, and production and dev VM images are identical, which goes against the practice mentioned above. Does Boxfuse support this scenario (perhaps undocumented), or are there workarounds for production deployments?
One solution is to include one keystore per environment (you can select the correct one at runtime based on the BOXFUSE_ENV environment variable) and pass the keystore password as an environment variable on instance startup; a sketch follows. See https://cloudcaptain.sh/docs/commandline/run#envvars
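A sketch of the runtime selection, assuming one keystore file per environment is baked into the image (the paths and all variable names other than BOXFUSE_ENV are placeholders):

    # Pick the keystore matching the current Boxfuse/CloudCaptain environment
    case "${BOXFUSE_ENV}" in
      prod) KEYSTORE=/app/keystore-prod.jks ;;
      test) KEYSTORE=/app/keystore-test.jks ;;
      *)    KEYSTORE=/app/keystore-dev.jks ;;
    esac

    # KEYSTORE_PASSWORD is passed as an environment variable at instance startup
    java -Djavax.net.ssl.keyStore="${KEYSTORE}" \
         -Djavax.net.ssl.keyStorePassword="${KEYSTORE_PASSWORD}" \
         -jar app.jar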