Recently I installed Graphite to monitor my application.
I installed it on an external (cloud) server and used the following configuration:
local_settings.py:
USE_REMOTE_USER_AUTHENTICATION = True
DASHBOARD_REQUIRE_AUTHENTICATION = True
DASHBOARD_REQUIRE_PERMISSIONS = True
Everything works fine: I can send data using Graphite plugins and read it in a browser. The problem is that the metrics are public, and no login is required to view them.
I also tried htaccess, but it isn't good enough, because there is no simple way to implement logout in Graphite.
I couldn't find out how to enforce user authentication before data is presented.
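For reference, one generic Django-level approach, sketched below purely as an assumption rather than a documented Graphite feature, is to add a login-required middleware to graphite-web so that anonymous users are redirected to the login page:
# Hypothetical middleware sketch (not part of Graphite): redirect anonymous
# users to the login page for every request except the login URL itself.
# Assumes django.contrib.auth's AuthenticationMiddleware runs before it.
from django.conf import settings
from django.http import HttpResponseRedirect

class LoginRequiredMiddleware(object):
    def process_request(self, request):
        if request.user.is_authenticated():
            return None  # already logged in
        if request.path.startswith(settings.LOGIN_URL):
            return None  # allow the login page itself
        return HttpResponseRedirect(
            '%s?next=%s' % (settings.LOGIN_URL, request.path))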
I'm trying to do some authentication inside a Django application using django-auth-ldap via the OpenLDAP client. It isn't working, so how do I enable some logging?
I CAN make LDAP queries using ldapsearch, so fundamentally my config is correct. I tried enabling logging for django-auth-ldap, but it just reports an Error(0), which is completely unhelpful.
So how do I enable logging for the OpenLDAP client part of the equation? Ideally I would like to see what queries it is making and which configuration is being passed down from django-auth-ldap. I did find ldap.conf, but its man page implies there is no logging or debug option.
Stumbled across the answer...
To enable logging via the Django LDAP library, add the following to the settings.py file for your project:
import ldap

AUTH_LDAP_GLOBAL_OPTIONS = {
    ldap.OPT_DEBUG_LEVEL: 255,  # libldap debug output goes to stderr
}
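As a side note, django-auth-ldap also has its own Python-level logger; a minimal sketch of wiring it up in settings.py, assuming the standard Django LOGGING dict:
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        # django-auth-ldap logs its search filters and bind attempts here
        "django_auth_ldap": {"level": "DEBUG", "handlers": ["console"]},
    },
}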
The error that this debug output flushed out was on my side: I was using docker-compose and setting environment values such as LDAP_SERVER="ldap://an.ldap.server.com", but I should not have been quoting the string, as the double quotes were made part of the value. Removing them got me moving again.
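A quick way to spot this kind of problem, as a sketch, is to print the raw value the Django process actually receives:
import os

# If the compose env file passes LDAP_SERVER="ldap://an.ldap.server.com"
# literally, the double quotes become part of the value:
server_uri = os.environ.get("LDAP_SERVER", "")
print(repr(server_uri))  # '"ldap://an.ldap.server.com"' -- note the quotes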
I was looking to run my Django and ReactJS web application on mobile by connecting the phone to my Mac via hotspot and pointing the host at the mobile's IP address. I changed localhost to 192.168.43.19 in /etc/hosts, so the code is easily shared between mobile and Mac, and I can run the localhost app on my mobile, which is connected to the Mac via hotspot.
The backend is built with Django REST Framework. The problem is that all the GET and POST calls to the backend API turn into OPTIONS calls (the browser's CORS preflight), so nothing is returned and the code does not work properly. From searching online, this happens because cross-origin access is not allowed by default. To handle it, I added the frontend URL to CORS_ORIGIN_WHITELIST in the settings file of the Django app, but it didn't work.
The CORS_ORIGIN_WHITELIST value is set to the origin where the React code runs:
CORS_ORIGIN_WHITELIST = (
'http://192.168.43.194:3000',
)
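For completeness, the rest of the django-cors-headers wiring (the package implied by the CORS_ORIGIN_WHITELIST setting) would look like the sketch below; as far as I know, the middleware has to sit above CommonMiddleware so preflight OPTIONS requests get CORS headers:
# settings.py -- minimal django-cors-headers wiring (sketch)
INSTALLED_APPS = [
    # ...
    'corsheaders',
]

MIDDLEWARE = [
    'corsheaders.middleware.CorsMiddleware',  # must come before CommonMiddleware
    'django.middleware.common.CommonMiddleware',
    # ...
]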
It would be really helpful if someone could recommend the correct way to handle this.
I am building a simple web app using React.js for the frontend and Django for the server side.
Thus I have frontend.herokuapp.com and backend.herokuapp.com.
When I attempt to make calls to my API through the react app the cookie that was received from the API is not sent with the requests.
I had expected to be able to support this configuration without doing anything special, since all API requests would (I thought) be made by the JS client app directly to the backend process with their authentication cookies attached.
In an attempt to find a workable solution, I set
SESSION_COOKIE_DOMAIN = "herokuapp.com"
While this is less than ideal (herokuapp.com is a vast domain), it would seem quite safe in production, as the apps would then be on api.myapp.com and www.myapp.com.
However, with this value set in settings.py, I get an AuthStateMissing error when hitting my /oauth/complete/linkedin-oauth2/ endpoint.
Searching Google for AuthStateMissing SESSION_COOKIE_DOMAIN yields one solitary result, which implies the issue was reported as a bug in Django social auth and has since been closed without further commentary.
Any light anyone could throw on this would be very much appreciated.
I ran into the exact same problem while using herokuapp.com.
I even posted a question on SO here.
According to the Heroku documentation:
"In other words, in browsers that support the functionality, applications in the herokuapp.com domain are prevented from setting cookies for *.herokuapp.com."
So Heroku blocks cookies shared between frontend.herokuapp.com and backend.herokuapp.com.
You need to add custom domains to frontend.herokuapp.com and backend.herokuapp.com.
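Once custom domains are in place, sharing the session cookie becomes a one-line setting; a sketch using the myapp.com names from the question:
# settings.py -- production sketch; myapp.com stands in for your custom domain.
# The leading dot lets the session cookie be sent to all subdomains,
# i.e. both api.myapp.com and www.myapp.com.
SESSION_COOKIE_DOMAIN = ".myapp.com"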
The full answer: https://stackoverflow.com/a/54513216/1501643
I'm trying to configure CubesViewer and try out the setup.
I've got the app installed and running, along with the Cubes Slicer app too.
However, when I visit the home page
http://127.0.0.1:8000/cubesviewer/
it fails, popping up the error "Error occurred while accessing the data server".
Debugging with the browser console shows an HTTP 403 error for the URL http://localhost:8000/cubesviewer/view/list/
After some googling and reading, I figured I needed to add REST framework auth settings (as mentioned here).
Now, after running migrate and runserver, I get a 401 error on that URL.
Clearly I'm missing something in settings.py. Can somebody help me out?
I'm using the cubesviewer tag v0.10 from the GitHub repo.
My settings are here: http://dpaste.com/2G5VB5K
P.S.: I've verified that Cubes Slicer works separately on its own.
I have reproduced this. This error may occur when you use different URLs to access a website and its related resources. For security reasons, browsers only allow access to resources from exactly the same host as the page you are viewing.
It seems you are accessing the app via http://127.0.0.1:8000, but you have configured CubesViewer to tell clients to access the data backend via http://localhost:8000. While they point at the same server, they are different strings from the browser's point of view.
Try accessing the app as http://localhost:8000.
If you deploy to a different server, you need to adjust settings. Here are the relevant configuration options, now with more comments:
# Base Cubes Server URL.
# Your Cubes Server needs to be running and listening on this URL, and it needs
# to be accessible to clients of the application.
CUBESVIEWER_CUBES_URL="http://localhost:5000"
# CubesViewer Store backend URL. It should point to this application.
# Note that this must match the URL that you use to access the application,
# otherwise you may hit security issues. If you access your server
# via http://localhost:8000, use the same here. Note that 127.0.0.1 and
# 'localhost' are different strings for this purpose. (If you wish to accept
# requests from different URLs, you may need to add CORS support).
CUBESVIEWER_BACKEND_URL="http://localhost:8000/cubesviewer"
Alternatively, you could change CUBESVIEWER_BACKEND_URL to "http://127.0.0.1:8000/cubesviewer", but I recommend using hostnames rather than IP addresses for this.
Finally, I haven't yet tested with CORS support, but check this pull request if you wish to try that approach.
How do I assign a proxy that requires a username and password, plus a custom user agent, to workers using Selenium and the PhantomJS driver with Python bindings?
I've had good success creating many workers that traverse my test website. I can also assign a user agent, or a proxy that does not require authorization, but I haven't figured out how to do both for the same worker at the same time.
The real issue at the moment, however, is assigning the workers a proxy that requires authorization by username and password.
The Players:
Selenium 2.33.0 / PhantomJS 1.9.1 / Python 2.7.3 / Ubuntu 12.04
Me:
Newbie. Python: weeks; Linux: days; Selenium: hours; PhantomJS: less still; SO: first post.
Searches Yielded:
How do I set a proxy for phantomjs/ghostdriver in python webdriver?
The answers may in fact be there, and in many other places I have read and re-read, but I can't connect the dots at my present level.
The user agent part is solved with this method:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

dcap = dict(DesiredCapabilities.PHANTOMJS)
dcap["phantomjs.page.settings.userAgent"] = (
    "Any user agent string here")
driver = webdriver.PhantomJS(desired_capabilities=dcap)
Proxy without authorization works with this:
service_args = [
    '--proxy=127.0.0.1:9999',
    '--proxy-type=http',
]
driver = webdriver.PhantomJS('/usr/local/bin/phantomjs', service_args=service_args)
With both methods above, I'm unsure how to pass the proxy and the UA to the PhantomJS driver together. At the moment I can only do one or the other, and not at all with a proxy that requires authorization.
Goals for this SO thread:
Learn how to assign a proxy that requires a username / password.
Assign a custom user agent to the same worker.
Using Selenium and the PhantomJS driver with Python bindings.
The end goal is to assign each worker a unique IP and pull from a pool of user agents. I remain optimistic about creating the logic for this, but the proxy with authorization is kicking me at the moment.
As you can tell, I'm very new to all of this and would appreciate any help and examples for this particular problem.
Thanks!
EDIT: The accepted answer below is incorrect. I was unable to reproduce the solution: only the proxy with authorization is assigned to the driver, and I am still unable to assign both a proxy and a user agent to the same driver.
Any help or direction would be greatly appreciated.
EDIT.02: Issue resolved. It was never a coding issue: a new proxy provider assigned a default UA at the server level that overrode the script above. Once this was removed, all was good.
Assign the user agent via desired capabilities:
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

dcap = dict(DesiredCapabilities.PHANTOMJS)
dcap["phantomjs.page.settings.userAgent"] = (
    "Your user agent string here . . .")
I found the API reference here for proxy authorization.
Add "--proxy-auth=username:password" to service_args, like so:
service_args = [
'--proxy=xxx.xxx.xx.xxx:xxxx',
'--proxy-auth=username:password',
'--proxy-type=http',
]
Then use both when starting the webdriver:
driver = webdriver.PhantomJS(desired_capabilities=dcap, service_args=service_args)
This took care of all my issues.
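Putting the pieces together, a minimal end-to-end sketch; the proxy address, credentials, and UA string are placeholders:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Custom user agent via desired capabilities.
dcap = dict(DesiredCapabilities.PHANTOMJS)
dcap["phantomjs.page.settings.userAgent"] = "Mozilla/5.0 (placeholder UA)"

# Authenticated proxy via PhantomJS service args (placeholder values).
service_args = [
    '--proxy=203.0.113.10:8080',
    '--proxy-auth=username:password',
    '--proxy-type=http',
]

driver = webdriver.PhantomJS(desired_capabilities=dcap,
                             service_args=service_args)
driver.get('http://example.com')
print(driver.page_source[:200])
driver.quit()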
EDIT: Unable to reproduce the solution; only the proxy is changed with the above method.
EDIT.02: Issue resolved. It was never a coding issue: a new proxy provider assigned a default UA at the server level that overrode the script above. Once this was removed, all was good.