I'm trying to do some authentication inside a Django application using django-auth-ldap via the OpenLDAP client. It isn't working, so how do I enable some logging?
I CAN make LDAP queries using ldapsearch, so fundamentally my config is correct. I tried enabling logging for django-auth-ldap, but it just reports Error(0), which is completely unhelpful.
So how do I enable logging for the OpenLDAP client part of the equation? Ideally I would like to see what queries it is making and which config is being passed down from django-auth-ldap. I did find ldap.conf, but its man page implies there is no logging or debug option.
Stumbled across the answer...
To enable logging via the Django LDAP library, add the following to the settings.py file for your project:

import ldap

AUTH_LDAP_GLOBAL_OPTIONS = {
    ldap.OPT_DEBUG_LEVEL: 255,
}
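Separately, if you also want output from django-auth-ldap itself (the logging the question mentions trying), the library writes to the "django_auth_ldap" logger; a minimal sketch for settings.py:

import logging

logger = logging.getLogger("django_auth_ldap")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)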
The error that this flushed out was that I was using docker-compose and setting some environment values such as LDAP_SERVER="ldap://an.ldap.server.com" - but I should not have quoted the string, as the double quotes became part of the value. Removing them got me moving again.
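For illustration (this snippet is not from the original post, and AUTH_LDAP_SERVER_URI is simply the django-auth-ldap setting such a variable typically feeds), the quoting problem looks roughly like this from the Django side:

import os

# docker-compose environment entries are not shell-parsed, so a value written as
# LDAP_SERVER="ldap://an.ldap.server.com" can keep the literal double quotes:
AUTH_LDAP_SERVER_URI = os.environ["LDAP_SERVER"]
# -> '"ldap://an.ldap.server.com"'  (an invalid URI because of the embedded quotes)
# With LDAP_SERVER=ldap://an.ldap.server.com the bare URI comes through as intended.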
I am building a simple web app using React.js for the frontend and Django for the server side.
They are deployed as two Heroku apps: frontend.herokuapp.com and backend.herokuapp.com.
When I attempt to make calls to my API through the React app, the cookie that was received from the API is not sent with the requests.
I had expected that I would be able to support this configuration without having to do anything special since all server-side requests would (I thought) be made by the JS client app directly to the backend process with their authentication cookies attached.
In an attempt to find a solution, I tried setting
SESSION_COOKIE_DOMAIN = "herokuapp.com"
While this is less than ideal (herokuapp.com is a vast domain), in production it would seem to be quite safe, as the apps would then be on api.myapp.com and www.myapp.com.
However, with this value set in settings.py I get an AuthStateMissing when hitting my /oauth/complete/linkedin-oauth2/ endpoint.
Searching Google for AuthStateMissing SESSION_COOKIE_DOMAIN yields one solitary result, which implies that the issue was reported as a bug in Django social auth and has since been closed without further commentary.
Any light anyone could throw would be very much appreciated.
I ran into the exact same problem while using herokuapp.com.
I even posted a question on SO here.
According to Heroku documentation:
In other words, in browsers that support the functionality, applications in the herokuapp.com domain are prevented from setting cookies for *.herokuapp.com
In short, Heroku blocks cookies set from frontend.herokuapp.com and backend.herokuapp.com.
You need to add custom domains to frontend.herokuapp.com and backend.herokuapp.com.
The entire answer https://stackoverflow.com/a/54513216/1501643
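As a rough sketch (the domain names below are hypothetical, not from the original answer): once both apps are served from a domain you control, the cookies can be scoped to the shared parent domain in settings.py:

# Assuming the apps live at www.myapp.com and api.myapp.com
SESSION_COOKIE_DOMAIN = ".myapp.com"  # session cookie is sent to both subdomains
CSRF_COOKIE_DOMAIN = ".myapp.com"     # likewise for the CSRF cookie, if needed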
Recently I installed Graphite to monitor my application.
I installed it on an external (cloud) server and used the following configuration:
local_settings.py:
USE_REMOTE_USER_AUTHENTICATION = True
DASHBOARD_REQUIRE_AUTHENTICATION = True
DASHBOARD_REQUIRE_PERMISSIONS = True
Everything works fine: I can send data using Graphite plugins and read it in the browser. The problem is that the metrics are public and no login is required.
I also tried htaccess, but it is not good enough (there is no simple way to implement logout in Graphite).
I couldn't find a way to enforce user authentication before presenting data.
I'm getting blank pages when navigating the WSO2 ESB-4.9.0 management console. For example, the registry, templates, endpoints, and local entries pages are all blank when navigating to them in the console UI.
I've found the following errors in the logs:
Error during rendering
IO Error executing tag: JSPException while including path '/templates/list_templates.jsp'. ServletException while including page.
The ESB is running as a YAJSW Windows service. I should note that the ESB runs fine when started straight from the command line; the problem only appears when using the service wrapper.
Strainy, since you mentioned that the ESB starts as a Windows service: in Carbon 4.4.x, the default wrapper.conf file needs to be updated with the following additional entries.
wrapper.java.additional.26 = -Dwso2.carbon.xml=${carbon_home}\\repository\\conf\\carbon.xml
wrapper.java.additional.27 = -Dwso2.registry.xml=${carbon_home}\\repository\\conf\\registry.xml
wrapper.java.additional.28 = -Dwso2.user.mgt.xml=${carbon_home}\\repository\\conf\\user-mgt.xml
wrapper.java.additional.29 = -Dwso2.transports.xml=${carbon_home}\\repository\\conf\\mgt-transports.xml
wrapper.java.additional.31 = -Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false
wrapper.java.additional.33 = -Dfile.encoding=UTF8
You can verify these configurations in wrapper.conf; they may help you solve the JSP error.
The following link may help if you need more information:
https://docs.wso2.com/display/ESB490/Installing+as+a+Windows+Service#InstallingasaWindowsService-SettinguptheYAJSWwrapperconfigurationfile
I just used the NSSM - the "Non-Sucking Service Manager".
It's actually amazingly simple to install a Service using this tool.
https://nssm.cc
Just set it up to point at the wso2server.bat file.
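For reference, a minimal sketch of the command (the service name and installation path here are hypothetical):

nssm install "WSO2ESB" "C:\wso2esb-4.9.0\bin\wso2server.bat"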
I'm keeping an eye on this issue, however: https://wso2.org/jira/browse/ESBJAVA-4342
I'm trying to configure CubesViewer and try out the setup.
I've got the app installed and running, along with the Cubes Slicer app.
However, when I visit the home page
http://127.0.0.1:8000/cubesviewer/
it fails, popping up the error "Error occurred while accessing the data server".
Debugging with the browser console shows an HTTP 403 error for the URL http://localhost:8000/cubesviewer/view/list/
After some googling and reading, I figured I'd need to add the REST framework auth settings (as mentioned here).
Now, after running migrate and runserver, I get a 401 error on that URL.
Clearly I'm missing something in settings.py. Can somebody help me out?
I'm using the CubesViewer tag v0.10 from the GitHub repo.
You can find my settings here: http://dpaste.com/2G5VB5K
P.S.: I've verified that Cubes Slicer works on its own.
I have reproduced this. This error may occur when you use different URLs to access a website and its related resources. For security reasons, browsers only allow access to resources from exactly the same host as the page you are viewing.
It seems you are accessing the app via http://127.0.0.1:8000, but you have configured CubesViewer to tell clients to access the data backend via http://localhost:8000. While it's the same IP address, they are different strings.
Try accessing the app as http://localhost:8000.
If you deploy to a different server, you need to adjust settings. Here are the relevant configuration options, now with more comments:
# Base Cubes Server URL.
# Your Cubes Server needs to be running and listening on this URL, and it needs
# to be accessible to clients of the application.
CUBESVIEWER_CUBES_URL="http://localhost:5000"
# CubesViewer Store backend URL. It should point to this application.
# Note that this must match the URL that you use to access the application,
# otherwise you may hit security issues. If you access your server
# via http://localhost:8000, use the same here. Note that 127.0.0.1 and
# 'localhost' are different strings for this purpose. (If you wish to accept
# requests from different URLs, you may need to add CORS support).
CUBESVIEWER_BACKEND_URL="http://localhost:8000/cubesviewer"
Alternatively, you could change CUBESVIEWER_BACKEND_URL to "http://127.0.0.1:8000/cubesviewer", but I recommend using hostnames rather than IP addresses for this.
Finally, I haven't yet tested with CORS support, but check this pull request if you wish to try that approach.
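If you do need to accept requests from a different origin, one common approach (not part of the original answer) is the django-cors-headers package; a rough sketch for settings.py, with a hypothetical frontend origin, using the setting names from current releases (older Django and package versions use MIDDLEWARE_CLASSES and CORS_ORIGIN_WHITELIST instead):

INSTALLED_APPS += ["corsheaders"]

# CorsMiddleware should sit as high as possible in the middleware stack.
MIDDLEWARE = ["corsheaders.middleware.CorsMiddleware"] + MIDDLEWARE

# Origins allowed to call the CubesViewer backend.
CORS_ALLOWED_ORIGINS = ["http://127.0.0.1:8000"]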
The AWS command line tools appear to be broken on both Linux (Ubuntu PP) and Windows (7). In both cases, after setting up the login credentials correctly and trying to run the most basic tool (getBalance.sh), I get a failure to authenticate.
An error occurred while fetching your balance: This request must be made over a secure channel. You must use 'https' rather than 'http'.
Seems simple enough, but there's nothing in the manual or in the installation directory that suggests this is an option supported by the command line tools.
Has someone already modified the shell scripts to use a secure connection? If not, any clues as to where I should begin the modifications?
I haven't used the tools extensively, so I can't say this solution is thoroughly tested, but getBalance.sh worked after doing this.
See this thread:
https://forums.aws.amazon.com/message.jspa?messageID=333485
From the link:
Edit the Command Line Tools Installation Directory\bin\mturk.properties file and edit the service_url to use https instead of http – i.e. https://mechanicalturk.amazonaws.com/?Service=AWSMechanicalTurkRequester for production, and https://mechanicalturk.sandbox.amazonaws.com/?Service=AWSMechanicalTurkRequester for sandbox.
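For clarity, the resulting line in mturk.properties would look something like this (production endpoint shown, exactly as given in the quoted instructions):

service_url=https://mechanicalturk.amazonaws.com/?Service=AWSMechanicalTurkRequester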