Test and Verify AWS Redis Integration with a Django Project

I am new to Django. I am trying to integrate a Redis cache into my Django project. I am using the AWS free tier to host the project on an EC2 machine behind the Gunicorn web server, and I am trying to integrate AWS ElastiCache for Redis. I have added the entry below to my settings.py file:
CACHE = {
    'default': {
        'BACKEND': "redis_cache.cache.RedisCache",
        'LOCATION': "redis://xxx.xxx.xxxxx.cache.amazonaws.com/1",
        'OPTIONS': {
            'CLIENT_CLASS': 'redis_cache.client.DefaultClient',
        },
    }
}
And below is my view function:
from django.core.cache import cache
from django.shortcuts import render
from .models import userdetails  # model import assumed; not shown in the original post

def usertable(request):
    obj = userdetails.objects.get(id=1)
    name = obj.name
    if cache.get(name):
        cache_name = cache.get(name)
        print("From CACHE")
    else:
        cache_name = obj.name
        cache.set(name, cache_name)
        print("*****************FROM DB********************")
    context = {
        'name': cache_name,
    }
    return render(request, 'usertable.html', context)  # template name assumed; the original snippet ended at context
This code works for me, and I can see From CACHE printed in my terminal. But when I manually connect to Redis using the CLI tool below:
redis-cli -h xx.xx.xxxxx…cache.amazonaws.com -p 6379 -n 1
and run keys *, I do not see any key-value pair set.
I am not sure if this is the correct way to test the Redis cache integration. Kindly advise if anyone has tried a Redis cache setup like this.
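One thing worth double-checking (an observation, not from the post above): Django reads the plural setting name CACHES; if only CACHE is defined, Django silently falls back to the default in-process LocMemCache, so cache.get() and cache.set() appear to work while nothing is ever written to Redis. Also, Django bakes a version (and optional KEY_PREFIX) into every stored key, so the bare name will not match in keys * output. A minimal sketch for inspecting the real key name from the Django shell, assuming the default key function:
from django.core.cache import cache

cache.set('probe', 'value', 300)
# With the default KEY_FUNCTION this prints something like ':1:probe'
# (cache version and prefix are part of the stored key name).
print(cache.make_key('probe'))
You can then look up that exact string in redis-cli, e.g. GET ":1:probe" after selecting database 1 with -n 1.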

Related

Unable to query VMs on vCenter Server Appliance using pyvmomi

I am getting the following error when trying to use pyvmomi to get a list of VMs from the vCenter Server Appliance:
pyVmomi.VmomiSupport.vim.fault.NoPermission: (vim.fault.NoPermission) {
    dynamicType = <unset>,
    dynamicProperty = (vmodl.DynamicProperty) [],
    msg = 'Permission to perform this operation was denied.',
    faultCause = <unset>,
    faultMessage = (vmodl.LocalizableMessage) [],
    object = 'vim.Folder:group-d1',
    privilegeId = 'System.View',
    missingPrivileges = (vim.fault.NoPermission.EntityPrivileges) [
        (vim.fault.NoPermission.EntityPrivileges) {
            dynamicType = <unset>,
            dynamicProperty = (vmodl.DynamicProperty) [],
            entity = 'vim.Folder:group-d1',
            privilegeIds = (str) [
                'System.View'
            ]
        }
    ]
}
This is my Python code:
import atexit
import ssl
import pdb

from pyVim import connect
from pyVmomi import vim

def vconnect(hostIP, port=None):
    if True:  # lab setup: use a relaxed SSL context
        context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        context.check_hostname = False
        context.verify_mode = ssl.CERT_NONE  # disable our certificate checking for lab
    else:
        context = ssl.create_default_context()
        context.options |= ssl.OP_NO_TLSv1_3
        # cipher = 'DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-GCM-SHA256'
        # context.set_ciphers(cipher)
    pdb.set_trace()
    if port:
        service_instance = connect.SmartConnect(host=str(hostIP),  # build python connection to vSphere
                                                user="root",
                                                pwd="HagsLoff#1324",
                                                port=port,
                                                sslContext=context)
    else:
        service_instance = connect.SmartConnect(host=str(hostIP),
                                                user="root",
                                                pwd="HagsLoff#1324",
                                                sslContext=context)
    atexit.register(connect.Disconnect, service_instance)  # register disconnect logic
    content = service_instance.RetrieveContent()
    container = content.rootFolder       # starting point to look into
    viewType = [vim.VirtualMachine]      # object types to look for
    recursive = True                     # whether we should look into it recursively
    containerView = content.viewManager.CreateContainerView(container, viewType, recursive)
    children = containerView.view
    for child in children:               # iterate all VM names in the environment
        summary = child.summary
        print(summary.config.name)

# connecting to ESX host
vconnect("192.168.160.160")
# connecting to vcsa VM
vconnect("192.168.160.170", 443)
So I am using a nested ESXi host that runs on my Workstation 16. I have deployed the vCSA on this ESXi host via the Windows CLI installer. Querying the ESXi host works fine, whereas querying the vCenter Server Appliance (vCSA) gives me the error above.
I looked at this discussion, which talks about setting 'global permissions'; however, on my vCenter Server Management VM, the 'Administration' tab does not look anything like the one shown there. So apparently I have a 'vCenter Server Management' appliance and not what is referred to as the 'vSphere Client'.
With this context set, I have some questions:
Is the error above due to my trial license?
How is the 'vCenter Server Management (vCSA)' appliance different from the 'vSphere Client'?
Is it possible to change 'global permissions' on the vCSA, or do I need the 'vSphere Client' to do that?
I tried adding the default port (443) as mentioned here, to no avail. Keen to hear from you soon.
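One hedged suggestion, not from the original post: on a vCSA, root is typically just the appliance shell account, while API permissions come from vCenter SSO, whose default administrator is administrator@vsphere.local. A NoPermission fault on System.View at the root folder usually means the authenticated user simply has no role assigned in vCenter. A minimal sketch of the same connection using the assumed SSO administrator account:
from pyVim import connect

# Assumption: the SSO admin created by the vCSA installer; the domain
# may differ if it was customised during deployment.
si = connect.SmartConnect(host="192.168.160.170",
                          user="administrator@vsphere.local",
                          pwd="sso-admin-password",  # placeholder
                          port=443,
                          sslContext=context)  # same relaxed SSL context as in vconnect()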

Django problem with shared Memcached nodes

I have two Django instances running on two servers, and I am using Memcached to cache some data in my applications.
Each server has its own Memcached installed. I want both of my applications to have access to both caches, but I can't get this to work: when I set a value in the cache from one application, the other application cannot access it.
My Memcached instances are running as root; I have also tried the memcache user and other users, but it didn't fix the problem.
For testing I used the Django shell. I import the cache class:
from django.core.cache import cache
set a value in the cache:
cache.set('foo', 'bar', 3000)
and try to get the value from my other Django instance:
cache.get('foo')
but it returns nothing!
Here is my settings.py file:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
        'LOCATION': [
            'first app server ip:port',
            'second app server ip:port',
        ],
    }
}
And my memcached.conf (comments deleted):
-d
logfile /var/log/memcache/memcached.log
# -v
-vv
-m 512
-p 11211
-u root
-l 192.168.174.160
# -c 1024
# -k
# -M
# -r
-P /var/run/memcached/memcached.pid
The order of the LOCATION entries in settings.py must be the same on all servers. Could you please check whether they are the same?
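For illustration, a sketch of what identical settings on both servers might look like (the second node's address is an assumption; the point is that both servers list both Memcached nodes in the same order, so that key-to-node hashing is consistent across the two applications):
# settings.py -- identical on both app servers
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
        'LOCATION': [
            '192.168.174.160:11211',  # node on server 1 (from the memcached.conf above)
            '192.168.174.161:11211',  # node on server 2 (assumed address)
        ],
    }
}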

Django, Apache2 on Google Kubernetes Engine writing Opencensus Traces to Stackdriver Trace

I have a Django web app served from Apache2 with mod_wsgi in docker containers running on a Kubernetes cluster in Google Cloud Platform, protected by Identity-Aware Proxy. Everything is working great, but I want to send GCP Stackdriver traces for all requests without writing one for each view in my project. I found middleware to handle this, using Opencensus. I went through this documentation, and was able to manually generate traces that exported to Stackdriver Trace in my project by specifying the StackdriverExporter and passing the project_id parameter as the Google Cloud Platform Project Number for my project.
Now to make this automatic for ALL requests, I followed the instructions to set up the middleware. In settings.py, I added the module to INSTALLED_APPS, MIDDLEWARE, and set up the OPENCENSUS_TRACE dictionary of options. I also added the OPENCENSUS_TRACE_PARAMS. This works great with the default exporter 'opencensus.trace.exporters.print_exporter.PrintExporter', as I can see the Trace and Span information, including Trace ID and all details in my Apache2 web server logs. However, I want to send these to my Stackdriver Trace processor for analysis.
I tried setting the EXPORTER parameter to opencensus.trace.exporters.stackdriver_exporter.StackdriverExporter, which works when run manually from the shell, as long as you supply the project number.
When it is set up to use StackdriverExporter, the web page will not load, the health check starts to fail, and ultimately the page comes back with a 502 error stating I should try again in 30 seconds (I believe the Identity-Aware Proxy generates this error once it detects the failed health check), but the server generates no errors, and there is nothing in the Apache2 access or error logs.
There is another dictionary in settings.py named OPENCENSUS_TRACE_PARAMS, which I presume is needed to determine which project number the exporter should be using. The example has GCP_EXPORTER_PROJECT set as None, and SERVICE_NAME set as 'my_service'.
What options do I need to set to get the exporter to send back to Stackdriver instead of printing to logs? Do you have any idea about how I can set this up?
settings.py
MIDDLEWARE = (
    ...
    'opencensus.trace.ext.django.middleware.OpencensusMiddleware',
)

INSTALLED_APPS = (
    ...
    'opencensus.trace.ext.django',
)

OPENCENSUS_TRACE = {
    'SAMPLER': 'opencensus.trace.samplers.probability.ProbabilitySampler',
    'EXPORTER': 'opencensus.trace.exporters.stackdriver_exporter.StackdriverExporter',  # This one just makes the server hang with no response or error and kills the health check.
    'PROPAGATOR': 'opencensus.trace.propagation.google_cloud_format.GoogleCloudFormatPropagator',
    # 'EXPORTER': 'opencensus.trace.exporters.print_exporter.PrintExporter',  # This one works to print the Trace and Span with IDs and details in the logs.
}

OPENCENSUS_TRACE_PARAMS = {
    'BLACKLIST_PATHS': ['/health'],
    'GCP_EXPORTER_PROJECT': 'my_project_number',  # Should this be None like the example, or Project ID, or Project Number?
    'SAMPLING_RATE': 0.5,
    'SERVICE_NAME': 'my_service',  # Not sure if this is my app name or some other service name.
    'ZIPKIN_EXPORTER_HOST_NAME': 'localhost',  # Are the following even necessary, or are they causing a failure that is not detected by Apache2?
    'ZIPKIN_EXPORTER_PORT': 9411,
    'ZIPKIN_EXPORTER_PROTOCOL': 'http',
    'JAEGER_EXPORTER_HOST_NAME': None,
    'JAEGER_EXPORTER_PORT': None,
    'JAEGER_EXPORTER_AGENT_HOST_NAME': 'localhost',
    'JAEGER_EXPORTER_AGENT_PORT': 6831,
}
Here's an example (I prettified the format for readability) of the Apache2 log when it is set to use the PrintExporter:
[Fri Feb 08 09:00:32.427575 2019]
[wsgi:error]
[pid 1097:tid 139801302882048]
[client 10.48.0.1:43988]
[SpanData(
    name='services.views.my_view',
    context=SpanContext(
        trace_id=e882f23e49e34fc09df621867d753532,
        span_id=None,
        trace_options=TraceOptions(enabled=True),
        tracestate=None
    ),
    span_id='bcbe7b96906a482a',
    parent_span_id=None,
    attributes={
        'http.status_code': '200',
        'http.method': 'GET',
        'http.url': '/',
        'django.user.name': ''
    },
    start_time='2019-02-08T17:00:29.845733Z',
    end_time='2019-02-08T17:00:32.427455Z',
    child_span_count=0,
    stack_trace=None,
    time_events=[],
    links=[],
    status=None,
    same_process_as_parent_span=None,
    span_kind=1
)]
Thanks in advance for any tips, assistance, or troubleshooting advice!
Edit 2019-02-08 6:56 PM UTC:
I found this in the middleware:
# Initialize the exporter
transport = convert_to_import(settings.params.get(TRANSPORT))

if self._exporter.__name__ == 'GoogleCloudExporter':
    _project_id = settings.params.get(GCP_EXPORTER_PROJECT, None)
    self.exporter = self._exporter(
        project_id=_project_id,
        transport=transport)
elif self._exporter.__name__ == 'ZipkinExporter':
    _service_name = self._get_service_name(settings.params)
    _zipkin_host_name = settings.params.get(
        ZIPKIN_EXPORTER_HOST_NAME, 'localhost')
    _zipkin_port = settings.params.get(
        ZIPKIN_EXPORTER_PORT, 9411)
    _zipkin_protocol = settings.params.get(
        ZIPKIN_EXPORTER_PROTOCOL, 'http')
    self.exporter = self._exporter(
        service_name=_service_name,
        host_name=_zipkin_host_name,
        port=_zipkin_port,
        protocol=_zipkin_protocol,
        transport=transport)
elif self._exporter.__name__ == 'TraceExporter':
    _service_name = self._get_service_name(settings.params)
    _endpoint = settings.params.get(
        OCAGENT_TRACE_EXPORTER_ENDPOINT, None)
    self.exporter = self._exporter(
        service_name=_service_name,
        endpoint=_endpoint,
        transport=transport)
elif self._exporter.__name__ == 'JaegerExporter':
    _service_name = self._get_service_name(settings.params)
    self.exporter = self._exporter(
        service_name=_service_name,
        transport=transport)
else:
    self.exporter = self._exporter(transport=transport)
The exporter is now named StackdriverExporter, instead of GoogleCloudExporter. I set up a class in my app named GoogleCloudExporter that inherits from StackdriverExporter, and updated my settings.py to use GoogleCloudExporter, but it didn't seem to work. I wonder if there is other code referencing these old naming schemes, possibly for the transport; I'm searching the source code for clues. This at least tells me I can get rid of the ZIPKIN and JAEGER param options, as the branch taken is determined by the EXPORTER param.
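For reference, a minimal sketch of the shim class described above (the module path is an assumption):
# my_app/exporters.py (assumed location)
from opencensus.trace.exporters.stackdriver_exporter import StackdriverExporter

class GoogleCloudExporter(StackdriverExporter):
    # The subclass's __name__ is 'GoogleCloudExporter', so the middleware
    # branch quoted above would pick up GCP_EXPORTER_PROJECT for it.
    pass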
Edit 2019-02-08 11:58 PM UTC:
I scrapped Apache2 to isolate the problem and just set my Docker image to use Django's built-in web server (CMD ["python", "/path/to/manage.py", "runserver", "0.0.0.0:80"]) and it works! When I go to the site, it writes traces to Stackdriver Trace for each request, and the Span name is the module and method being executed.
Somehow Apache2 is not being allowed to send these, though I can do so from the shell when running as root. I'm adding apache2 and mod-wsgi tags to the question, because I have a funny feeling this has to do with forking child processes in Apache2 and mod_wsgi. Could the exporter be failing because Apache2's child processes are sandboxed, or could this be a permissions issue? It seems strange, because as far as I am aware it is only calling Python modules, not external OS binaries. Any other ideas would be greatly appreciated!
I had this problem while using gunicorn with gevent as the worker class. To resolve it and get Cloud Trace working, the solution was to monkey-patch grpc like so:
from gevent import monkey
monkey.patch_all()

# Make grpc cooperate with gevent's monkey-patched sockets and threads.
import grpc.experimental.gevent as grpc_gevent
grpc_gevent.init_gevent()
See https://github.com/grpc/grpc/issues/4629#issuecomment-376962677
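As a usage note (placement is an assumption, not from the answer above): the patch has to run before grpc creates any channels, so one common spot is the very top of wsgi.py, ahead of the Django imports:
# wsgi.py -- patch first, import Django afterwards ('myproject' is a placeholder name)
from gevent import monkey
monkey.patch_all()
import grpc.experimental.gevent as grpc_gevent
grpc_gevent.init_gevent()

import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
application = get_wsgi_application()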

Deployment of Django application using MongoDB on AWS

How do I define the settings of the Django application to use the MongoDB server running on the same instance as the Django project? I tried it with 127.0.0.1 and port 27017 (which I assume is the default port at which the MongoDB server runs) in the settings of the Django application. I then tried it with the IP address of the AWS instance, but with no luck. It always gives me this error:
ConnectionError: You have not defined a default connection
My Django project has the following Mongo settings:
MONGO_SETTINGS = {
    'DB_NAME': 'spotmentor',
    'HOST': '127.0.0.1',
    'PORT': 27017,
    'USERNAME': '',
    'PASSWORD': ''
}
Then I used mongoengine's connect to establish the connection. I am importing the above MONGO_SETTINGS as mongoset:
from mongoengine import connect

connect(mongoset.get('DB_NAME'),
        host=mongoset.get('HOST'),
        port=mongoset.get('PORT'),
        username=mongoset.get('USERNAME'),
        password=mongoset.get('PASSWORD'))
I changed the value of the HOST key to the AWS instance's public IP, and I still got the same ConnectionError.
I have also defined:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.dummy',
    }
}
How can I resolve this?
mongoengine does not require any extra settings to connect to MongoDB; the settings you have provided should suffice.
I suggest you re-check your installation of MongoDB. Try:
sudo apt-get remove mongodb
sudo apt-get install mongodb
This should solve your problem.
Also, you need not define a dummy database backend if you are not using SQL databases.
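To verify the server is actually reachable before wiring it into Django, a quick probe from a Python shell (a sketch; the database name comes from the settings above):
from mongoengine import connect

# connect() returns a pymongo MongoClient; server_info() raises if
# mongod is not reachable on the given host and port.
client = connect('spotmentor', host='127.0.0.1', port=27017)
print(client.server_info()['version'])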

Working with Django: Proxy setup

I have a local development Django setup with Apache. The problem is that on the deployment server there is no proxy, while at my workplace I work behind an HTTP proxy, hence the requests calls fail.
Is there any way of making all calls from the requests library go via a proxy? (I know how to add a proxy to individual calls using the proxies parameter, but is there a global solution?)
I got the same error reported by AmrFouad. At last, I fixed it by updating wsgi.py as follows:
os.environ['http_proxy'] = "http://proxy.xxx:8080"
os.environ['https_proxy'] = "http://proxy.xxx:8080"
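This works because requests reads the conventional http_proxy/https_proxy environment variables automatically (its trust_env behaviour), so no per-call proxies argument is needed:
import requests

# The proxy settings exported in wsgi.py above are picked up from
# os.environ by requests itself.
r = requests.get("https://example.com")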
Add the following lines in your wsgi file. Note that os.environ values must be strings, so the dictionary has to be serialised (e.g. as JSON) before it can be stored in an environment variable:
import json
import os

http_proxy = "10.10.1.10:3128"
https_proxy = "10.10.1.11:1080"
ftp_proxy = "10.10.1.10:3128"

proxyDict = {
    "http": http_proxy,
    "https": https_proxy,
    "ftp": ftp_proxy
}

# Environment variables can only hold strings, so store the dict as JSON.
os.environ["PROXIES"] = json.dumps(proxyDict)
And now you can use this environment variable anywhere you want:
r = requests.get(url, headers=headers, proxies=json.loads(os.environ["PROXIES"]))
P.S. - You should have a look at the following links:
Official Python Documentation for Environment Variables
Where and how do I set an environmental variable using mod-wsgi and django?
Python ENVIRONMENT variables
UPDATE 1
You can do something like the following so that the proxy settings are only used on localhost:
import json
import os
import socket

if socket.gethostname() == "localhost":
    # Only set the proxy on the local development server.
    os.environ["PROXIES"] = json.dumps(proxyDict)
else:
    # Store an empty dictionary (as JSON) on other hosts.
    os.environ["PROXIES"] = json.dumps({})