I have an issue that is described in this ticket.
I can't do collectstatic uploads with Django locally to our static.somesite.com, since S3 appends s3.amazonaws.com to the URL and the resulting hostname no longer matches their own *.s3.amazonaws.com wildcard certificate.
I have set a DNS record for static.somesite.com that points to the S3 service's address.
I have AWS_S3_SECURE_URLS = False set.
Not sure how to solve it yet. I understand completely why it is happening, but there has to be a workaround, right? On our production server this works just fine; I just can't find the right settings. This is the full error message:
boto.https_connection.InvalidCertificateException:
Host static.somesite.com.s3.amazonaws.com returned an invalid certificate
(remote hostname "static.somesite.com.s3.amazonaws.com" does not match certificate)
{
'notAfter': 'Apr 9 23:59:59 2015 GMT',
'subjectAltName': (
('DNS', '*.s3.amazonaws.com'),
('DNS', 's3.amazonaws.com')),
'subject': (
(('countryName', u'US'),),
(('stateOrProvinceName', u'Washington'),),
(('localityName', u'Seattle'),),
(('organizationName', u'Amazon.com Inc.'),),
(('commonName', u'*.s3.amazonaws.com'),)
)
}
I've been digging in the code of the transport app I've been using. It seemed to be picking up config settings from somewhere besides my Django project settings and overriding them.
A few years ago I was testing out Google Cloud Storage for a Google App Engine test project, which meant I installed the "Gsutils" package globally. Guess what? Gsutils uses Boto too! So once I found out that I could set a Boto config file, I started looking for it. On OS X, no ~/.boto file could be seen in the Finder or when listing the files in my home directory with ls -al. Yet when I opened it with nano ~/.boto, voilà! There were heaps of settings already in there from the time I used Gsutils.
Once in there I disabled the
#https_validate_certificates = True
setting and everything works like a charm now.
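For reference, the relevant part of my ~/.boto now looks roughly like this (a sketch; the section name follows Boto's documented INI format, and the rest of the Gsutils leftovers are omitted):

[Boto]
# Stops boto from rejecting the *.s3.amazonaws.com wildcard certificate
# when the bucket name contains dots (e.g. static.somesite.com).
# Dev-only: this disables certificate validation for every boto connection.
https_validate_certificates = False

Since this turns off certificate checking globally, it's best confined to local development.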
Related
I have been asking everywhere I could. I saw some answers here that mentioned adding a host entry for 127.0.0.1, but since my dev server is on an Ubuntu VM locally, I am not sure how that would work.
Anyway, I actually copied the URI from GitHub and added it to my settings file:
SOCIAL_AUTH_GITHUB_KEY = os.environ.get('SOCIAL_AUTH_GITHUB_KEY')
SOCIAL_AUTH_GITHUB_SECRET = os.environ.get('SOCIAL_AUTH_GITHUB_SECRET')
SOCIAL_AUTH_GITHUB_REDIRECT_URI = 'http://127.0.0.1:8000/complete/github/'
I looked at http vs https and everything else. I even typed it in instead of pasting.
If you have any other suggestions how I could fix that, I would be so very happy.
I have a feeling it might be because I am using 127.0.0.1 instead of a domain, but Google worked fine like that...
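One thing worth checking along those lines, sketched here under the assumption that python-social-auth builds the redirect from the host you browse to: keep both sides on localhost so the URIs match exactly, down to the trailing slash.

# Hypothetical fix: register http://localhost:8000/complete/github/ as the
# callback URL in the GitHub OAuth app settings, then keep Django in sync.
SOCIAL_AUTH_GITHUB_REDIRECT_URI = 'http://localhost:8000/complete/github/'

Then open the dev server at http://localhost:8000 rather than http://127.0.0.1:8000, so the URI Django sends lines up with the registered callback.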
I am using Django with Nginx. My dev environment mirrors my prod environment: in development I go through Nginx to access Django (using docker-compose). Now I am working on making my website more robust security-wise, as per Mozilla Observatory. My site is at a B right now. The big thing I am working on next is getting a Content Security Policy (CSP) configured for my website. Not only do I want to get my site to an A (because gamification), I also want to close off XSS attack vectors.
After some searching I found Django CSP, which looks great. I installed it, added the middleware, and then added some CSP configuration to my settings.py like this:
CSP_DEFAULT_SRC = ("'none'")
CSP_FONT_SRC = ("https://fonts.gstatic.com")
CSP_IMG_SRC = ("'self'", "https://www.google-analytics.com")
CSP_SCRIPT_SRC = (
"'self'",
"https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js",
"https://code.jquery.com/jquery-3.2.1.slim.min.js",
"https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js",
"https://www.google-analytics.com/analytics.js",
"https://www.googletagmanager.com/gtag/js",
"https://www.googletagmanager.com/gtm.js;",
)
CSP_STYLE_SRC = (
"'self'",
"https://fonts.googleapis.com/",
"https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/",
)
I fire up my website in local dev and I see this error message in Firefox dev tools:
Content Security Policy: Couldn’t parse invalid host 'http://localhost/static/css/
Why is localhost invalid? Is it that CSPs do not really work in development? I'd really prefer not to "test my CSP code live" in production if I don't have to. Is there a workaround for this? I have searched a bit and have not really found anything. This question has the exact error message, but it seems more related to potential malware in browser extensions. I am guessing there is additional config I can tweak to get the CSP to recognize 'localhost' as valid, but I am unsure where to look next. Any help is appreciated! Thanks.
Update: I am now seeing the site work in Dev with the new CSP in Edge, Safari, and Chrome. The only place it is broken is with Firefox. I cleared the cache and did a hard refresh but it is still saying localhost is not valid.
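For anyone who lands here: one likely culprit, judging from the config above (an educated guess, not something confirmed by the django-csp docs), is the stray semicolon inside the gtm.js entry. django-csp joins your sources into the header value as-is, so the emitted policy contains "https://www.googletagmanager.com/gtm.js;", and Firefox's stricter parser trips over what follows while other browsers silently recover. Note too that ("'none'") without a trailing comma is a plain string, not a one-element tuple. A corrected sketch:

CSP_DEFAULT_SRC = ("'none'",)  # trailing comma makes this a real tuple
CSP_FONT_SRC = ("https://fonts.gstatic.com",)
CSP_IMG_SRC = ("'self'", "https://www.google-analytics.com")
CSP_SCRIPT_SRC = (
    "'self'",
    "https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js",
    "https://code.jquery.com/jquery-3.2.1.slim.min.js",
    "https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js",
    "https://www.google-analytics.com/analytics.js",
    "https://www.googletagmanager.com/gtag/js",
    "https://www.googletagmanager.com/gtm.js",  # no stray ';' here
)
CSP_STYLE_SRC = (
    "'self'",
    "https://fonts.googleapis.com/",
    "https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/",
)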
I have a React microservice (via Nginx) deployed onto Google Cloud Run, with its environment variable for the backend set to another Cloud Run instance that's running gunicorn to serve the backend.
My Flask app is set up following everything I could find about allowing CORS:
from flask import Flask
from flask_cors import CORS, cross_origin

def create_app():
    app = Flask(__name__)
    app.config.from_object(config)  # config object defined elsewhere
    CORS(app, resources={r"/*": {"origins": "*"}})
    app.config['CORS_HEADERS'] = 'Content-Type'
    return app

# Different file, a blueprint's urls:
@blueprint.route('/resources')
@cross_origin()
def get_resources():
    ...
Yet I'm still getting the dreaded Access to XMLHttpRequest at 'https://backend/resources/' from origin 'https://frontend' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Does anyone have any insight into this, or know where else to look to figure it out? I wanted to set up GKE with my microservices, but initially took the path of least resistance to get a POC up in the cloud. I have the backend speaking with my Cloud SQL instance, and I'm so close!!
Thank you
You've set up more than you need to. Unless you need to provide different CORS access for different endpoints, the simplest example just requires calling CORS(app):
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

@app.route("/resources")
def get_resources():
    return "Hello, cross-origin-world!"

if __name__ == "__main__":
    app.run('0.0.0.0', 8080, debug=True)
And you'll see that the header is present:
$ curl -I localhost:8080/resources
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 26
Access-Control-Allow-Origin: *
Server: Werkzeug/1.0.1 Python/3.7.4
Date: Tue, 21 Apr 2020 17:19:25 GMT
Everything was self-inflicted.
As Dustin Ingram mentioned, if you enable CORS for Flask, it will work. I still have NO idea why I was getting the CORS issues at all; I've had CORS enabled for my Flask app from the get-go.
After I nuked everything and redeployed, the CORS issues disappeared. However, I still ended up getting a mix of 404s, 405s, and 308s.
There were a couple of issues, all my own shortcomings, that in combination gave me those errors. In create-react-app (I think webpack is doing it), environment variables passed into the Docker runtime do not get respected, so the environment variable I set in Cloud Run wasn't working at all. For now I opted for the process.env.VARIABLE || 'hardcoded-url' route. Once I got that figured out, I also remembered that trailing slashes in Flask URLs are bad... those gave me 308s, permanent redirects. Once I sorted that, I realized that during my manual deployments I wasn't switching the Cloud Build image to the latest one. Sigh. Once I started deploying the latest images, everything started working. Yay!
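On the 308s, for anyone curious: Flask (via Werkzeug) answers a request that is missing a route's trailing slash with a permanent redirect, and a cross-origin XHR that hits such a redirect can also surface as a CORS error, since the redirect response itself carries no Access-Control-Allow-Origin header. A minimal sketch of the behavior (route paths are made up):

from flask import Flask

app = Flask(__name__)

# Defined WITH a trailing slash: GET /things is met with a permanent
# redirect (a 308 in recent Werkzeug) to /things/ instead of the payload.
@app.route('/things/')
def things():
    return 'ok'

# Defined WITHOUT one: GET /gadgets works, but GET /gadgets/ is a 404.
@app.route('/gadgets')
def gadgets():
    return 'ok'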
Thanks Dustin and Gabe for spending your time on my foolishness.
I just recently wrestled with this as well... My issue was trying to use some JS library to make my URL requests "easier", which instead was munging the headers on the request side (not the server side). I switched to just using straight XMLHttpRequest and it started working fine. I also switched from application/json to application/x-www-form-urlencoded; I don't know if that made a difference or not, but I'm including it for completeness.
You also shouldn't (I say shouldn't, but you know how that goes) need anything OTHER than CORS(app). All the @cross_origin stuff and the config pieces are only there to make CORS access narrower, so it's not wide open, but you have it wide open anyway in your initial code (CORS(app, resources={r"/*": {"origins": "*"}}) is, I believe, the same as CORS(app)).
Long story short, try looking at the request object rather than the Flask side.
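If you want to convince yourself the two spellings behave the same, here's a quick sketch using Flask's test client (endpoint name invented for the demo):

from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # defaults: every route, every origin

@app.route("/ping")
def ping():
    return "ok"

with app.test_client() as client:
    resp = client.get("/ping", headers={"Origin": "http://example.com"})
    print(resp.headers.get("Access-Control-Allow-Origin"))  # prints: *

Swap CORS(app) for CORS(app, resources={r"/*": {"origins": "*"}}) and the printed header shouldn't change.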
Edit: adding the request code that worked for me after I couldn't get the fetch API working:
var xhttp = new XMLHttpRequest();
xhttp.open("POST", <url>, true);  // <url> is a placeholder for the endpoint
xhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
xhttp.send(Data);  // Data: whatever body your app sends
I had the same problem; it was solved by setting Allow unauthenticated invocations in Cloud Run. However, this should only be done for testing, and for a production environment you'll have to configure Cloud IAM.
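If it helps, the same thing can be done from the CLI; the service name and region below are placeholders:

gcloud run services add-iam-policy-binding my-service \
    --region=us-central1 \
    --member="allUsers" \
    --role="roles/run.invoker"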
Recently I was trying to set up SFTP on my AWS Lightsail Ubuntu/Plesk instance. When I noticed my current user didn't have access to the vhosts/example.com/httpdocs folder, I tried to give the current user access rights with this command over SSH:
- sudo chown -R (my-username)
After that I successfully got access to the desired folder in my SFTP client.
But unfortunately, something then went wrong with the domain: accessing it in a browser gave a 503 error, and the file manager in Plesk returned an Error 13.
After restoring the user permissions with this command:
- /usr/local/psa/bin/repair --restore-vhosts-permissions
the file manager was back to normal, but not the website domain, which still returns a 503 error.
Any idea what's wrong? I believe this has to be a user permission problem, but I couldn't find anywhere else to fix it. Not to mention, I am a newbie with Ubuntu servers.
Hope to find some decent answer here :) Thanks and have a good day!
After a few months of running a VPS on AWS Lightsail with Plesk, I've found a few things that can make this problem happen:
1. Insufficient permissions on the directory: make sure you have at least 755 on the root or the directory you want to access (see the example commands after this list).
2. The PHP version and the Nginx/Apache configuration can also be the issue. Plesk Onyx currently ships with both Nginx and Apache; I always choose "FastCGI application served by Apache", and that often solves the problem. The setting lives under "Websites & Domains > PHP Settings".
3. A missing index.php, index.html, or other index file, leaving the server unsure which file to serve first.
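As a rough sketch of point 1 (the username and domain are examples; psacln is the group Plesk normally assigns to a subscription's files):

sudo chown -R myuser:psacln /var/www/vhosts/example.com/httpdocs
sudo find /var/www/vhosts/example.com/httpdocs -type d -exec chmod 755 {} \;

When in doubt, the repair utility shown in the question is the safer route, since it restores Plesk's own expected permissions.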
I hope this solves someone else's problem. Discussion can continue in the comments. :) Have a good day!
I am trying to create a Google sign-in and getting the error:
Permission denied to generate login hint for target domain
Before you mark this as a duplicate: this is not the same as the question asked at Google sign in website Error : Permission denied to generate login hint for target domain, because in that case the questioner was on localhost, whereas I am getting this error on the server.
Specifically, I have included the URL of the server in the Authorized JavaScript Origins, as in the following image:
and when I get the error, the request shows that the same URL was sent, as in the following image:
Is there something else I should be putting in my Restrictions page? Is there any way to figure out what is going on here? Is there a log at the developer console that can tell me what is happening?
Okay, I figured this out. I was using an IP address (as in "http://175.132.64.120") for the redirect URI, as this was a test site on the live server, and Google only accepts actual domains (as in "http://mycompany.com" or "http://localhost") as redirect URIs.
Which, you know, THEY COULD HAVE SAID SOMEWHERE IN THE DOCUMENTATION, but whatever.
I know this is an old question, but it's the first result when you look for the problem via Google, so I'll share my solution with you guys.
When deploying a Google OAuth service on a private network, i.e. on an IP that can't be reached from the Internet, you should use a magic DNS service like xip.io, which gives you a URL that your browser will resolve to your internal IP. You see, Google needs to be able to reach your authorized origin via your browser; that's why localhost works if you're serving from your own computer, but it won't work when you're deploying outside the Internet, as on a VPN, an intranet, or through a tunnel.
So, the steps:
get your IP address, the one you're deploying at that isn't a public domain; let's say it's 10.0.0.1 as an example.
add http://10.0.0.1.xip.io to your Authorized JavaScript Origins in the Google Developer Console.
open your site by visiting http://10.0.0.1.xip.io
clear your cache for the site, if necessary.
log in with Google, and voilà.
I got to this solution using this answer in another question.
If you are using http://127.0.0.1/projects/testplateform, change it to http://localhost/projects/testplateform and it will work just fine.
If you are testing on your machine (locally), don't use the IP address (i.e. http://127.0.0.1:8888) in the Client ID configuration; use localhost instead and it should work.
Example: http://localhost:8888
To allow an IP address to be used as a valid JavaScript origin, first add an entry to your /etc/hosts file:
10.0.0.1 mydevserver.com
and then add the domain mydevserver.com to Authorized JavaScript Origins. If you are using a nonstandard port, specify it together with your domain in Authorized JavaScript Origins.
Note: clear your cache and it will work.
Just ran across this same issue on an external test server, without a DNS entry yet. If you have permission on your local machine just edit your /etc/hosts file:
175.132.64.120 www.jimboweb.com
And use http://www.jimboweb.com as an authorized domain.
I have a server on a private network, IP 172.16.X.X.
The problem was solved by SSH-forwarding the app's port to a port on my localhost.
Now I am able to use the deployed app with Google OAuth by browsing to localhost.
ssh -N -L8081:localhost:8080 ${user}@${host}
I also added localhost:8081 to "Authorized redirect URIs" and "Authorized JavaScript origins" in the Google Developers Console (console.developers.google.com).
After battling with it for a few hours, I found out that my config in the Google Cloud console was all correct, similar to the answers provided. Due to caching issues or something, I had to recreate an OAuth Client ID, and then it suddenly started working.
It's a pretty old issue, but I encountered it and there wasn't any helpful resource, so I am posting my solution.
For me the issue appeared when I hosted my web app locally, using google-auth for logging in.
The URL I was trying to hit was http://127.0.0.1:8000/master
I just changed the IP to http://localhost:8000/master/
And it worked. I was able to log in to the website using Google Auth.
Hope this helps someone someday.
Install XAMPP and run the Apache server,
put your files (index and co.) in a folder in the XAMPP dir (c:\xampp\htdocs\yourfolder).
Type this in your browser's URL bar: http://localhost/yourfolder/index.html