I created one EC2 instance, and the requirement is to install Jenkins and VestaCP on the same instance but on different ports, with the Apache 2 server also active. All three conditions are now met, but Jenkins and VestaCP don't go live; I get a "can't reach this page" error. What kind of changes do I need in the HTML file?
If I put in the IP address of the instance and want to access the Jenkins and VestaCP sites, what kind of change do I need in the HTML pages?
I am new to AWS and recently I was trying to access a webpage using an EC2 instance. I uploaded the webpage using the following bash commands in the User Data field while creating the instance:
#!/bin/bash
yum update -y
yum -y install httpd
systemctl enable httpd
systemctl start httpd
echo '<html><h1>Sample Webpage</h1></html>' > /var/www/html/index.html
I noticed that the public IP address of the instance directed me to the Apache Web Server's test page when the names of the security group and the instance were different, but to the desired webpage when the names were the same.
Could anyone please explain why this is so?
There is nothing wrong with your user_data; it works exactly as expected. Whatever you are checking does not involve this code, so please double-check your instances and their user data.
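If you want to confirm which user data is actually attached to a given instance, you can pull it back with the AWS CLI (the instance ID below is a placeholder):
# User data is returned base64-encoded, so decode it to read the script
aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 --attribute userData --query 'UserData.Value' --output text | base64 --decode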
The following is my EC2 User Data:
#!/bin/bash
sudo yum update -y
sudo yum install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd
In the security group, SSH (port 22) and HTTP (port 80) are open.
Yet when I try accessing http://public_ip_of_instance, the Apache page doesn't load.
Also, Apache is not installed on the instance when I check with sudo systemctl status httpd.
I then installed it manually on the EC2 server and it worked. Then I removed it with yum remove, as I wanted to see whether the User Data works.
I stopped the instance and started it again, but the User Data script doesn't run: I am unable to access the HTTP page through the browser, and httpd is not installed on the instance.
Where is the actual issue? I remember this same thing working on another instance some months back.
Your user data is correct. Whatever is happening with your website is not due to the user data code that you provided.
There could be many reasons it does not work: the public IP of the instance has changed (as always happens when you stop/start an instance), or the instance may have pre-existing software that clashes with httpd.
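Two quick checks on the instance itself can narrow this down (169.254.169.254 is the standard EC2 instance metadata endpoint):
sudo systemctl status httpd                                   # is Apache actually installed and running?
curl -s http://169.254.169.254/latest/meta-data/public-ipv4   # which public IP does the instance have right now?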
Here's some general advice on running UserData once or on each startup.
Short answer: as John mentioned in the comments, EC2 instances only run the UserData (aka bootstrap) script once, at initialization.
The user data Bash/PowerShell script is infrastructure as code: you deploy the script and it installs and configures the machine.
This confuses everyone starting out with AWS, but when you think about it, it doesn't make sense to rerun the UserData script every boot when the machine has already been configured.
What people often do instead is make "golden images" (aka Amazon Machine Images, AMIs) of pre-configured EC2 instances, typically for machines that take a long time to install and configure. The beauty of this is that you can set up Auto Scaling groups to use the images, which avoids any long installation during a scale-up event.
Pro tip: when developing a UserData script, run through and test it manually on the EC2 instance. Trust me, it's far quicker than troubleshooting unattended UserData errors.
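A rough way to do that (assuming the instance metadata service v1 is reachable) is to pull the user data back from the metadata endpoint and run it by hand:
curl -s http://169.254.169.254/latest/user-data -o /tmp/userdata.sh   # fetch the script attached to this instance
chmod +x /tmp/userdata.sh
sudo /tmp/userdata.sh                                                 # run it interactively so you can see any errors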
Long answer: you can run the UserData on each boot of the machine using a MIME multi-part file. A MIME multi-part file allows your script to override how frequently user data is run by the cloud-init package.
https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
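As a sketch, such a user data document looks like the following; the cloud_final_modules override tells cloud-init to run user scripts on every boot rather than only the first one, and the echo line is just a placeholder for your own script:
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
echo "ran at $(date)" >> /var/log/userdata-every-boot.log
--//--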
For all those who run into this problem, first of all check the log with this command:
sudo cat /var/log/cloud-init-output.log
If you notice connection errors to the various repositories, the reason is that you don't have an internet connection yet. If, once inside your EC2 instance, you manage to run the update and install commands successfully, then the reason they fail in the UserData is that your EC2 takes a few seconds to get its internet connection and executes the commands before having it. To solve this problem, just add the following command after #!/bin/bash:
#!/bin/bash
until ping -c1 8.8.8.8 &>/dev/null; do :; done
sudo yum update -y
...
This will prevent your EC2 instance from executing commands before an internet connection is established.
I have a Django app running on a Linux server under NGINX. The "user" for the Django app is www-data. In this app I try to connect to AWS IoT, and to do that I believe the boto3 package looks for the AWS credentials in ~/.aws/credentials. The problem is that for the user www-data there is no such path! When I log in to the server (using my real username) and run a script that connects to AWS, it connects just fine. Let's say my username is "joe". There is indeed a file /home/joe/.aws/credentials that contains the correct credentials, which is why the script works fine when I run it as user "joe". But when the Django app is running it doesn't work, because there is no login user www-data, i.e. there is no file /home/www-data/.aws/credentials.
I understand that boto3 lets us set an environment variable to specify a non-standard path to the credentials file. This env variable is AWS_SHARED_CREDENTIALS_FILE, and there is also an AWS_CONFIG_FILE.
However, I don't know how to set an environment variable in Django for user www-data so that boto3 can now use that environment variable to specify the AWS credentials path.
Anyone know how to do this? Note that this is a production environment so I can't use any local server tricks/hacks.
If you are running your Django app on an EC2 instance, the best practice is to associate an IAM role with the instance.
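As a rough sketch with the AWS CLI (the role, profile, and instance ID names below are placeholders, and a role with an AWS IoT policy is assumed to exist already), attaching the role looks like this; once it is associated, boto3 picks up credentials from the instance profile automatically and no credentials file is needed:
# Wrap the existing role in an instance profile and attach it to the instance
aws iam create-instance-profile --instance-profile-name my-iot-profile
aws iam add-role-to-instance-profile --instance-profile-name my-iot-profile --role-name my-iot-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=my-iot-profile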
I had exactly the same issue, but in a Docker container and with Apache instead of NGINX. For the container, we can do the following:
Edit /etc/apache2/envvars, where the environment settings for Apache live:
echo "export AWS_SHARED_CREDENTIALS_FILE=/root/.aws/credentials" >> /etc/apache2/envvars
Change ownership of the AWS credentials file's parent directory:
chown -R www-data:www-data /root
Restart Apache:
service apache2 restart
Note that here the ownership of root's home directory was changed. This is because the AWS SDK needs certain file permissions and ownership on the credentials file (which means only www-data will be able to use these credentials from now on). A better practice, especially if you're running on an actual machine and not a container, might be to copy the credentials file to a new location and follow the same steps:
mkdir -p /home/joe/foo/.aws/
cp /home/joe/.aws/credentials /home/joe/foo/.aws/credentials
sudo chown -R www-data:www-data /home/joe/foo/
echo "export AWS_SHARED_CREDENTIALS_FILE=/home/joe/foo/.aws/credentials" | sudo tee -a /etc/apache2/envvars
sudo service apache2 restart
I don't know how well this fits the NGINX setup, but I hope it helps a bit.
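With NGINX, Django usually runs behind a separate application server such as gunicorn or uWSGI, so the variable has to be set on that service rather than on NGINX itself. A minimal sketch, assuming the app runs under a systemd unit called gunicorn.service and the credentials were copied as above:
sudo systemctl edit gunicorn
# In the drop-in file that opens, add:
#   [Service]
#   Environment="AWS_SHARED_CREDENTIALS_FILE=/home/joe/foo/.aws/credentials"
sudo systemctl daemon-reload
sudo systemctl restart gunicorn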
What do I want to do?
Step 1: Mount an S3 bucket to an EC2 instance.
Step 2: Install an FTP server on the EC2 instance and tunnel FTP requests to files in the bucket.
What have I done so far?
create bucket
create security group with open input ports (FTP:20,21 - SSH:22 - some more)
connect to ec2
And ran the following commands:
wget https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/s3fs/s3fs-1.74.tar.gz
tar -xvzf s3fs-1.74.tar.gz
yum update all
yum install gcc libstdc++-devel gcc-c++ fuse fuse-devel curl-devel libxml2-devel openssl-devel mailcap
cd s3fs-1.74
./configure --prefix=/usr
make
make install
vi /etc/passwd-s3fs # set access:secret keys
chmod 640 /etc/passwd-s3fs
mkdir /s3bucket
cd /s3bucket
And cd answers: Transport endpoint is not connected.
I don't know what's wrong. Maybe I am using the wrong user? But currently I only have one user (for test reasons) besides root.
The next step would be the FTP tunnel, but for now I'd like to get this working.
I have now followed these instructions instead: https://github.com/s3fs-fuse/s3fs-fuse
I guess they are calling the API in the background too, but it works as I wanted.
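For reference, the actual mount step with s3fs-fuse looks roughly like this ("mybucket" is a placeholder bucket name):
# Store the keys where s3fs expects them, then mount the bucket
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > /etc/passwd-s3fs
chmod 640 /etc/passwd-s3fs
mkdir -p /s3bucket
s3fs mybucket /s3bucket -o passwd_file=/etc/passwd-s3fs -o allow_other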
One possible solution to mount S3 to an EC2 instance is to use the new file gateway.
Check out this:
https://aws.amazon.com/about-aws/whats-new/2017/02/aws-storage-gateway-supports-running-file-gateway-in-ec2-and-adds-file-share-security-options/
http://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html
Point 1
Whilst the other answerer is correct in saying that S3 is not built for this, it's not true to say a bucket cannot be mounted (I'd seriously consider finding a better way to solve your problem, however).
That being said, you can use s3fs-fuse to mount S3 buckets within EC2. There are plenty of good reasons not to do this, detailed here.
Point 2
From there it's just a case of setting up a standard FTP server, since the bucket now appears to your system as if it is any other file system (mostly).
vsftpd could be a good choice for this. I'd have a go at both and then post separate questions about any specific problems you run into, but this should give you a rough outline to work from. (Well, in reality I'd have a go at neither and use S3 via app code consuming the API, but still.)
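A minimal sketch of the FTP side, assuming Amazon Linux and the /s3bucket mount point from the question (local_root makes FTP sessions land inside the mounted bucket):
sudo yum install -y vsftpd
echo "local_root=/s3bucket" | sudo tee -a /etc/vsftpd/vsftpd.conf
sudo systemctl enable vsftpd
sudo systemctl start vsftpd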
1. I was using this tutorial: https://www.youtube.com/watch?v=wNr7YqjjzOY
2. I installed my first EC2 instance based on Ubuntu.
3. I connected via SSH.
But I can't add a new file or edit existing files.
Do I need to use anything else? By the way, I have the same problem in WinSCP.
sudo bash
echo "test" > test.html
Of course, this is just for testing.
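A less risky alternative to working as root is to give your login user ownership of the web root. A sketch, assuming the default Ubuntu login user ubuntu and Apache's default document root /var/www/html:
sudo chown -R ubuntu:ubuntu /var/www/html   # let the ubuntu user create and edit files over SSH/WinSCP
echo "test" > /var/www/html/test.html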