Amazon AWS EC2 Web Server Forbids Accessing Image Files

I'm running a website using an Amazon Web Services EC2 instance, but I'm having trouble accessing certain images stored in my website's directory. The website can be viewed here: http://jkmrcsolutions.com
The strange thing is, some images can be accessed just fine, and others give a 403 Access Forbidden error. The problem is well illustrated in the images subdirectory: http://jkmrcsolutions.com/images/
Some of the images, like the JKM RC Solutions banner, are easily viewable, while others give the aforementioned 403 error.
I've restarted the EC2 instance and double-checked that the HTTP security group is enabled, but it hasn't solved the problem. Any ideas?
Thanks in advance!
-John

Change the permissions of all the files, like so:
find /var/www/html/images -type f -exec chmod a+r {} \;
a+r grants read permission to all users.
You need to be the root user (or use sudo) to execute this.
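If the files are readable but you still get 403s, the directories themselves may be the culprit: Apache also needs execute (search) permission on every directory in the path. A minimal sketch, assuming the same docroot as the command above:
# Directories need execute (search) permission so Apache can traverse them
sudo find /var/www/html/images -type d -exec chmod a+rx {} \;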

Related

When I access Jenkins through an EC2 instance I get an error

I've got an issue. I created a Jenkins AMI: I snapshotted a running EC2 instance with an already configured Jenkins master on port 8443 with an HTTPS certificate, and created an image from it. But when I curl the Jenkins instance I get the following:
(screenshot of the curl output: https://i.stack.imgur.com/K2tz0.png)
I checked the Jenkins logs and everything was normal, and my Elastic Load Balancer is healthy, which means the security groups and other things are working just fine. Does anybody have a clue why it is giving a 403 Forbidden? Another point is that I can even access the GUI.
By using curl you're making your life harder but look at some of what comes back:
<meta http-equiv='refresh' content='1;url=/login?from=%2F'/>
If you've done HTML programming, this is one way of having the browser execute a redirect. Why Jenkins doesn't do some sort of HTTP redirect I don't know, but the code is telling you that, after 1 second, it redirects to the URL /login.
curl isn't going to interpret the HTML for you, unlike the browser. Jenkins is working fine - you just need to follow, with curl, what the HTML and JavaScript code is telling you to do.
The 403 error is the Jenkins application itself saying your current user is not allowed to perform the current action.
It appears you're not logged in, so your request is treated as coming from the anonymous user. If the anonymous user should have permission to perform this action, you will need to grant it.
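For example, a minimal sketch of both approaches with curl (the hostname, user, and API token are placeholders): either request the login page the meta refresh points at, or authenticate so the 403 disappears.
# Follow the meta refresh by hand and fetch the login page
curl -k https://jenkins.example.com:8443/login
# Or authenticate with a Jenkins user and API token
curl -k -u myuser:my-api-token https://jenkins.example.com:8443/api/json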

Vhosts domain 503 Unavailable after changing user permissions on Ubuntu (AWS Lightsail)

Recently I was trying to set up SFTP on my AWS Lightsail Ubuntu/Plesk instance. Once I noticed my current user didn't have access to the vhosts/example.com/httpdocs folder, I tried to give the current user access rights with this command over SSH:
sudo chown -R (my-username)
After that I successfully got access to the desired folder in my SFTP client.
But unfortunately, something went wrong with the domain: accessing it in a browser gave a 503 error, and the file manager in Plesk returned an Error 13.
After restoring the user permissions with this command:
/usr/local/psa/bin/repair --restore-vhosts-permissions
the file manager was back to normal, but not the website domain, which still returns the 503 error.
Any idea what's wrong? I believe this has to be a user permission problem, but I couldn't find anywhere else to fix it. Not to mention, I'm a newbie with Ubuntu servers.
Hope to find some decent answer here :) Thanks and have a good day!
So, after a few months of deploying VPS instances on AWS Lightsail with Plesk, here are a few things that could cause this problem:
1. Insufficient permissions on the directory: make sure you have at least 755 on the root of the directory you want to access (see the sketch after this list).
2. The PHP version and nginx/Apache configuration can also be the issue. Plesk Onyx currently ships with both nginx and Apache; I always choose "FastCGI application served by Apache", and that often solves the problem. This setting can be found under "Websites & Domains > PHP Settings".
3. A missing index.php, index.html, or other index file, so the server can't tell which file it should serve first.
I hope this solves someone else's problem. Discussion can continue in the comments. :) Have a good day!
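A minimal permissions sketch for point 1, assuming the standard Plesk vhost layout (the example.com path is a placeholder for your domain):
# Directories get 755 and regular files 644 so the web server can serve them
sudo find /var/www/vhosts/example.com/httpdocs -type d -exec chmod 755 {} \;
sudo find /var/www/vhosts/example.com/httpdocs -type f -exec chmod 644 {} \;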

"Access denied: Anonymous users does not have storage.objects.list access to bucket" when trying to host a static website from a Google Bucket

I'm trying to follow the instructions at https://cloud.google.com/storage/docs/hosting-static-website to host a static website from a Google bucket. I've created the CNAME alias, uploaded the content to a bucket named the same as the website (www.kurtpeek.com), and checked "Share publicly" for all items. However, when I browse to the website I see the error quoted in the title.
I've read at http://tekhoow.blogspot.be/2015/12/soving-accessdenied-on-google-cloud.html that this problem can be solved from the command line using gsutil. However, I've done it before for a different website using the web-based console; I just don't remember how.
I suspect it should be somewhere in the "IAM" menu, but I can't seem to find 'public read' options similar to those commands.
Can anyone point out the 'missing ingredient' to make the website work?
I finally did use the command-line solution:
~$ gsutil web set -m index.html gs://www.kurtpeek.com
Setting website configuration on gs://www.kurtpeek.com/...
and now the website works as expected.
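Note that gsutil web set only configures which object is served as the index page; for the "Anonymous users does not have storage.objects.list access" error to go away, the objects also need to be publicly readable. A hedged one-liner (bucket name taken from the question; assumes the iam subcommand is available in your gsutil version):
# Grant public read access to the bucket's objects
gsutil iam ch allUsers:objectViewer gs://www.kurtpeek.com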

Uploading data to Amazon S3 directly from a URL [duplicate]

Is it possible to upload a file to S3 from a remote server?
The remote server is basically a URL-based file server. For example, given http://example.com/1.jpg, it serves the image. It doesn't do anything else, and I can't run code on this server.
Is it possible to have another server tell S3 to upload a file from http://example.com/1.jpg?
upload from http://example.com/1.jpg
server -------------------------------------------> S3 <-----> example.com
If you can't run code on the server or execute requests then, no, you can't do this. You will have to download the file to a server or computer that you own and upload from there.
You can see the operations you can perform on Amazon S3 at http://docs.amazonwebservices.com/AmazonS3/latest/API/APIRest.html
Checking the operations for both the REST and SOAP APIs, you'll see there's no way to give Amazon S3 a remote URL and have it grab the object for you. All of the PUT requests require the object's data to be provided as part of the request, meaning the server or computer initiating the request needs to have the data.
I had a similar problem in the past where I wanted to download my users' Facebook thumbnails and upload them to S3 for use on my site. The way I did it was to download the image from Facebook into memory on my server, then upload it to Amazon S3 - the whole thing took under 2 seconds. After the upload to S3 completed, I wrote the bucket/key to a database.
Unfortunately there's no other way to do it.
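If you do control some intermediate machine, the relay can be a one-liner that streams the remote file into S3 without touching disk. A minimal sketch with the AWS CLI (the bucket name is a placeholder, and credentials must already be configured):
# '-' tells the CLI to read the object body from stdin
curl -fsSL http://example.com/1.jpg | aws s3 cp - s3://my-bucket/1.jpg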
I think the suggestion provided is quite good: you can SCP the file to an EC2 instance and push it to the S3 bucket from there. Using the .pem file gives you passwordless authentication, and via a PHP file you can validate the extensions; the PHP file can pass the file as an argument to the SCP command.
The only problem with this solution is that you must have an instance in AWS. You can't use this solution if your website is hosted with another hosting provider and you are trying to upload files straight to an S3 bucket.
Technically it's possible using AWS Signature Version 4. Assuming your remote server is the 'customer' in the diagram (image omitted), you could prepare a form on the main server and send the form fields to the remote server for it to curl. Detailed example here.
You can use the scp command from a terminal.
1) Using the terminal, go to the directory containing the file you want to transfer to the server.
2) Type this:
scp -i yourAmazonKeypairPath.pem fileNameThatYouWantToTransfer.php ec2-user@ec2-00-000-000-15.us-west-2.compute.amazonaws.com:
N.B. Add "ec2-user@" before the ec2-... hostname you got from the EC2 website!! This is such a picky error!
3) Your file will be uploaded and the progress will be shown. When it reaches 100%, you are done!
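To complete the relay to S3 from there, a hedged follow-up step (the bucket name is a placeholder, and the instance needs IAM credentials allowing s3:PutObject):
# Run on the EC2 instance once the scp has finished
aws s3 cp fileNameThatYouWantToTransfer.php s3://my-bucket/fileNameThatYouWantToTransfer.php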

Google: Permission denied to generate login hint for target domain NOT on localhost

I am trying to create a Google sign-in and getting the error:
Permission denied to generate login hint for target domain
Before you mark this a duplicate, this is not the same as the question asked at Google sign in website Error : Permission denied to generate login hint for target domain because in that case the questioner was on localhost, whereas I am getting this error on the server.
Specifically, I have included the URL of the server in the Authorized JavaScript Origins (screenshot omitted), and when I get the error, the request shows that the same URL was sent (screenshot omitted).
Is there something else I should be putting in my Restrictions page? Is there any way to figure out what is going on here? Is there a log at the developer console that can tell me what is happening?
Okay, I figured this out. I was using an IP address (as in "http://175.132.64.120") for the redirect URI, since this was a test site on the live server, and Google only accepts actual URLs (as in "http://mycompany.com" or "http://localhost") as redirect URIs.
Which, you know, THEY COULD HAVE SAID SOMEWHERE IN THE DOCUMENTATION, but whatever.
I know this is an old question, but it's the first result when you look this problem up via Google, so I'll share my solution with you guys.
When deploying a Google OAuth service on a private network, i.e. an IP that can't be accessed via the Internet, you should use a magic DNS service like xip.io that will give you a URL your browser will resolve to your internal IP. You see, Google needs to be able to reach your authorized origin via your browser; that's why setting localhost works if you're serving it on your computer, but it won't work when you're deploying outside the Internet, as in a VPN, intranet, or tunnel.
So, the steps:
1. Get the IP address you're deploying at, the one that isn't a public domain; let's say it's 10.0.0.1 as an example.
2. Add http://10.0.0.1.xip.io to your Authorized JavaScript Origins on the Google Developer Console.
3. Open your site by visiting http://10.0.0.1.xip.io
4. Clear your cache for the site, if necessary.
5. Log in with Google, and voilà.
I got to this solution using this answer in another question.
If you are using http://127.0.0.1/projects/testplateform, change it to http://localhost/projects/testplateform; it will work just fine.
If you're testing on your machine (locally), don't use the IP address (e.g. http://127.0.0.1:8888) in the Client ID configuration; use localhost instead and it should work.
Example: http://localhost:8888
To allow an IP address to be used as a valid JavaScript origin, first add an entry to your /etc/hosts file:
10.0.0.1 mydevserver.com
and then add the domain mydevserver.com to your Authorized JavaScript Origins. If you are using a nonstandard port, specify it with your domain in the Authorized JavaScript Origins.
Note: clear your cache and it will work.
Just ran across this same issue on an external test server without a DNS entry yet. If you have permission on your local machine, just edit your /etc/hosts file:
175.132.64.120 www.jimboweb.com
and use http://www.jimboweb.com as an authorized domain.
I have a server on a private network, IP 172.16.X.X.
The problem was solved by SSH-forwarding the app's port to a port on my localhost.
Now I am able to use the deployed app with Google OAuth by browsing to localhost:
ssh -N -L8081:localhost:8080 ${user}@${host}
I also added localhost:8081 to "Authorized redirect URIs" and "Authorized JavaScript origins" in console.developers.google.com (Google Developers Console screenshot omitted).
After battling with it for a few hours, I found that my config in the Google Cloud console was all correct and similar to the answers provided. Due to caching issues or something, I had to recreate an OAuth Client ID, and then it suddenly started working.
It's a pretty old issue, but I encountered it and there wasn't any helpful resource, so I am posting my solution.
For me the issue occurred when I hosted my web app locally, using google-auth for logging in.
The URL I was trying to hit was: http://127.0.0.1:8000/master
I just changed the IP to localhost: http://localhost:8000/master/
And it worked. I was able to log in to the website using Google Auth.
Hope this helps someone someday.
Install XAMPP and run the Apache server.
Put your files (index and co.) in a folder in the XAMPP dir (c:\xampp\htdocs\yourfolder).
Type this in your browser's URL bar: http://localhost/yourfolder/index.html