I have a local Django development setup with Apache. The problem is that the deployment server has no proxy, while at my workplace I work behind an HTTP proxy, so the requests calls fail.
Is there any way of making all calls from the requests library go through a proxy? [I know how to add a proxy to individual calls using the proxies parameter, but is there a global solution?]
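For reference, this is roughly how I add the proxy to an individual call today (the proxy address is only a placeholder):
import requests

proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}
r = requests.get("https://api.example.com/data", proxies=proxies, timeout=10)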
I got the same error reported by AmrFouad. In the end, I fixed it by updating wsgi.py as follows:
os.environ['http_proxy'] = "http://proxy.xxx:8080"
os.environ['https_proxy'] = "http://proxy.xxx:8080"
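With those two variables set in wsgi.py, requests reads them automatically, so a plain call such as the sketch below (example URL) already goes through the proxy without a proxies argument:
import requests

r = requests.get("https://api.example.com/data", timeout=10)  # routed via the proxy from the environment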
Add the following lines to your wsgi file.
import os

http_proxy = "10.10.1.10:3128"
https_proxy = "10.10.1.11:1080"
ftp_proxy = "10.10.1.10:3128"

# Environment variable values must be strings, so set the standard
# proxy variables individually rather than storing a dict in os.environ.
os.environ["HTTP_PROXY"] = http_proxy
os.environ["HTTPS_PROXY"] = https_proxy
os.environ["FTP_PROXY"] = ftp_proxy
Now every call made with the requests library picks up these settings automatically, because requests honours the standard proxy environment variables:
r = requests.get(url, headers=headers)
P.S. - You should have a look at the following links:
Official Python Documentation for Environment Variables
Where and how do I set an environmental variable using mod-wsgi and django?
Python ENVIRONMENT variables
UPDATE 1
You can do something like the following so that the proxy settings are only applied on your local development machine.
import socket

if socket.gethostname() == "localhost":
    # only the local development machine sits behind the proxy
    os.environ["HTTP_PROXY"] = http_proxy
    os.environ["HTTPS_PROXY"] = https_proxy
    os.environ["FTP_PROXY"] = ftp_proxy
else:
    # make sure no proxy is used on any other host
    for var in ("HTTP_PROXY", "HTTPS_PROXY", "FTP_PROXY"):
        os.environ.pop(var, None)
Related
I am getting the following error when trying to use pyVmomi to get a list of VMs from the vCenter Server Appliance.
pyVmomi.VmomiSupport.vim.fault.NoPermission: (vim.fault.NoPermission) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
msg = 'Permission to perform this operation was denied.',
faultCause = <unset>,
faultMessage = (vmodl.LocalizableMessage) [],
object = 'vim.Folder:group-d1',
privilegeId = 'System.View',
missingPrivileges = (vim.fault.NoPermission.EntityPrivileges) [
(vim.fault.NoPermission.EntityPrivileges) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
entity = 'vim.Folder:group-d1',
privilegeIds = (str) [
'System.View'
]
}
]
}
This is my Python code:
import atexit
import ssl
from pyVim import connect
from pyVmomi import vim
import pdb
def vconnect(hostIP,port=None):
if (True):
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE # disable our certificate checking for lab
else:
context = ssl.create_default_context()
context.options |= ssl.OP_NO_TLSv1_3
#cipher = 'DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-GCM-SHA256'
#context.set_ciphers(cipher)
pdb.set_trace()
if (port):
service_instance = connect.SmartConnect(host=str(hostIP), # build python connection to vSphere
user="root",
pwd="HagsLoff#1324",
port=port,
sslContext=context)
else:
service_instance = connect.SmartConnect(host=str(hostIP), # build python connection to vSphere
user="root",
pwd="HagsLoff#1324",
sslContext=context)
atexit.register(connect.Disconnect, service_instance) # build disconnect logic
content = service_instance.RetrieveContent()
container = content.rootFolder # starting point to look into
viewType = [vim.VirtualMachine] # object types to look for
recursive = True # whether we should look into it recursively
containerView = content.viewManager.CreateContainerView(container, viewType, recursive) # create container view
children = containerView.view
for child in children: # for each statement to iterate all names of VMs in the environment
summary = child.summary
print(summary.config.name)
# connecting to ESX host
vconnect("192.168.160.160")
# connecting to vcsa VM
vconnect("192.168.160.170", 443)
So I am using a nested ESXi host that runs on my Workstation 16. I have deployed the VCSA on this ESXi host via the Windows CLI installer. Querying the ESXi host works fine, whereas querying the vCenter Server Appliance (VCSA) gives me the above error.
I looked at this discussion, which talks about setting 'global permissions'; however, on my vCenter Server Management VM the 'Administration' tab does not look anything like this:
What it looks like instead is this:
So apparently I have a 'vCenter Server Management' appliance and not what is referred to as the 'vSphere Client'.
So with this context set, I have some questions:
Is the error above due to my trial license?
How is the 'vCenter Server Management' (VCSA) appliance different from the 'vSphere Client'?
Is it possible to change 'global permissions' on the VCSA, or do I need the 'vSphere Client' to do that?
I tried adding the default port (443) as mentioned here, to no avail. Keen to hear from you soon.
I am new to Django. I was trying to add a Redis cache to my Django project. I am using the AWS free tier to host my Django project on an EC2 machine with the gunicorn web server, and I am trying to integrate AWS ElastiCache for Redis. I have added the following entry to my settings.py file:
CACHE = {
'default': {
'BACKEND' : "redis_cache.cache.RedisCache",
'LOCATION' : "redis://xxx.xxx.xxxxx.cache.amazonaws.com/1",
'OPTIONS' : {
'CLIENT_CLASS' : 'redis_cache.client.DefaultClient',
},
}
}
And below is my view function:
from django.core.cache import cache  # needed for the cache.get / cache.set calls below

def usertable(request):
    obj = userdetails.objects.get(id=1)
    name = obj.name
    if cache.get(name):
        cache_name = cache.get(name)
        print("From CACHE")
    else:
        cache_name = obj.name
        cache.set(name, cache_name)
        print("*****************FROM DB********************")
    context = {
        'name': cache_name,
    }
This code is working for me and I can see "From CACHE" printed in my terminal. But when I manually connect to Redis using the CLI tool below:
redis-cli -h xx.xx.xxxxx…cache.amazonaws.com -p 6379 -n 1
and run keys *, I do not see any key/value pair set.
I am not sure if this is the correct way to test the Redis cache integration. Kindly advise if anyone has tried a Redis cache setup.
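For what it's worth, this is roughly the sanity check I run from the Django shell (the key and value are just examples):
# run inside "python manage.py shell"
from django.core.cache import cache

cache.set("probe_key", "probe_value", 300)   # cache for five minutes
print(cache.get("probe_key"))                # expect "probe_value" back
After that I would expect keys * in redis-cli (database 1) to list probe_key if the Redis backend is really the one being used.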
I want my Django project to be accessible at many different endpoints. For one app, I want it accessible at app.domain.com and for another app I want it accessible at dashboard.domain.com. How can I achieve this? I am using AWS Elastic Beanstalk and Route 53.
I tried looking at Django's djangoproject.com and their GitHub repo, since they do this. However, I couldn't figure it out. Thanks!
You can define two settings.py files, each with its own associated urls.py file:
app_settings.py
from my_project.settings import *
ROOT_URLCONF = 'my_project.app_urls'
ALLOWED_HOSTS = ['app.domain.com']
dashboard_settings.py
from my_project.settings import *
ROOT_URLCONF = 'my_project.dashboard_urls'
ALLOWED_HOSTS = ['dashboard.domain.com']
Define your URLs for each site in my_project/app_urls.py and my_project/dashboard_urls.py respectively.
Then start two instances of your Django project (with uwsgi, gunicorn or whatever you use) with those two distinct settings files (using the DJANGO_SETTINGS_MODULE environment variable, for example).
This way, both instances share the same codebase but expose distinct URLs.
For example, using uwsgi, you could have these two files (with distinct ports):
app.ini
[uwsgi]
http = 127.0.0.1:8001
module = my_project.wsgi
processes = 4
threads = 2
pidfile = app.pid
env = DJANGO_SETTINGS_MODULE=my_project.app_settings
dashboard.ini
[uwsgi]
http = 127.0.0.1:8002
module = my_project.wsgi
processes = 4
threads = 2
pidfile = dashboard.pid
env = DJANGO_SETTINGS_MODULE=my_project.dashboard_settings
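If you prefer gunicorn over uwsgi, a rough equivalent (ports are only examples) would be:
DJANGO_SETTINGS_MODULE=my_project.app_settings gunicorn my_project.wsgi --bind 127.0.0.1:8001
DJANGO_SETTINGS_MODULE=my_project.dashboard_settings gunicorn my_project.wsgi --bind 127.0.0.1:8002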
I have deployed my Django web application on my institute's server using Apache and mod_wsgi, and I am using django-allauth Google authentication. My institute's network uses a few proxy servers to reach the Internet.
Google authentication works fine while I am running the app on localhost, but as soon as I move the app to https_://fusion.*******.ac.in, Google authentication shows the following
Error image
callback uri: https_://fusion.*******.ac.in/accounts/google/login/callback/
Please help me with this problem.
Add the following lines to your wsgi file.
import os

http_proxy = "host:port"
https_proxy = "host:port"
ftp_proxy = "host:port"

# Environment variable values must be strings, so set the standard
# proxy variables individually rather than storing a dict in os.environ.
os.environ["HTTP_PROXY"] = http_proxy
os.environ["HTTPS_PROXY"] = https_proxy
os.environ["FTP_PROXY"] = ftp_proxy
I created a new Django project;
added to my settings.py:
DEBUG = False
ALLOWED_HOSTS= [
'localhost',
'my_site.com'
]
created an app test_view;
added hello_world to test_view.views:
from django.http.response import HttpResponse
def hello_world(request):
return HttpResponse('Hello World!!!')
added a test route to urls.py: url(r'test/', 'test_view.views.hello_world');
edited /etc/hosts:
127.0.0.1 localhost my_site.com
Now when I try to access http://my_site.com:8000/test/, Django returns Bad Request (400). But when the URL is http://localhost:8000/test/, I can see my Hello World page. What could be wrong?
UPD:
The same result with DEBUG = True
UPD2:
One more working hostname is ubuntu-virtualbox (the computer's name).
But even when I changed the computer's name to my_site, ubuntu-virtualbox was still reachable and my_site still returned Bad Request (400).
Could it be because of some system settings? (It's a clean Ubuntu in VirtualBox.)
Or maybe the problem is in the virtualenv? Is there a way to trace the error?
It might be a bad cookie. Try deleting your cookies.
It looks like Django can tell when the request isn't resolved through a DNS server. Installing and configuring bind9, instead of editing /etc/hosts, solved this problem for me.
You need another line in your hosts file.
127.0.0.1 localhost
127.0.0.1 my_site.com
Then in your ALLOWED_HOSTS...
ALLOWED_HOSTS = [
'localhost',
'.my_site.com', # not 'my_site.com'
]
ALSO, and this is probably important seeing as you are running your site from a virtual machine: when you run the site with python manage.py runserver, run it like this...
python manage.py runserver virtual.server.ip.address:8000
Obviously replace 'virtual.server.ip.address' with that virtual machine's actual IP address.
I set DEBUG = None and my Django site works.