I'm having some trouble with my Kubernetes cluster on AWS.
I'm trying to get my Django instances to connect to the RDS service via the DB endpoint:
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': os.environ['NAME'],
        'USER': os.environ['USER'],
        'PASSWORD': os.environ['PASSWORD'],
        'HOST': os.environ['HOST'],
        'PORT': os.environ['PORT'],
    }
}
The host string resembles service.key.region.rds.amazonaws.com and is passed to the container via an env entry in deploy.yml:
containers:
  - name: service
    env:
      - name: HOST
        value: service.key.region.rds.amazonaws.com
This setup works in Kubernetes locally, but not in the cluster I have on AWS. There it returns the following error instead:
django.db.utils.OperationalError: could not translate host name
Any suggestions or am I missing something in how AWS likes handling things?
Assuming your AWS deployment is in the same VPC as your RDS instance, you will need to change your host to use the private IP.
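If the private IP alone doesn't fix it, it helps to confirm DNS resolution from inside the running container first. A minimal sketch (assuming the HOST env var from the deploy.yml above):

import os
import socket

# Resolve the RDS endpoint the same way psycopg2 would; a socket.gaierror
# here reproduces the "could not translate host name" failure.
host = os.environ['HOST']
print(socket.gethostbyname(host))

If this raises socket.gaierror, the pod cannot resolve the endpoint and the problem is DNS/VPC networking rather than Django.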
My Django project runs normally on localhost and on Heroku, but when I deployed it to Google Cloud Platform I got this error:
could not connect to server: Connection refused
Is the server running locally and accepting connections on Unix domain socket "/cloudsql/<connection_name>/.s.PGSQL.5432"?
The connection to the database in settings.py looks like this:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'database_name',
        'USER': 'postgres',
        'PASSWORD': 'password',
        # production
        'HOST': '/cloudsql/connection_name',
        'PORT': '5432',
    }
}
Additionally, my app.yaml looks like this:
runtime: python37

handlers:
- url: /static
  static_dir: static/
- url: /.*
  script: auto

env_variables:
  DJANGO_SETTINGS_MODULE: fes_app.settings
requirements.txt ends like this:
sqlparse==0.4.2
toml==0.10.2
uritemplate==4.1.1
urllib3==1.25.11
whitenoise==5.2.0
twilio==6.9.0
I have tried using the binary version of psycopg2, and I also granted the Cloud SQL Client role to the service account.
NOTE: I am using the App Engine standard environment.
We likely need more information to help you out, but did you follow the tutorial below from the official Google docs? That's usually how I get started, and then I make modifications from there.
I would compare how Google deploys a Django app with your own settings and see what's missing. For example, your requirements.txt file does not look complete (unless you only pasted part of it), so I would start there.
https://cloud.google.com/python/django/appengine
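For comparison with the tutorial, the usual standard-environment pattern is to connect over the Cloud SQL unix socket on App Engine and through the Cloud SQL proxy locally. A minimal sketch (GAE_APPLICATION is an environment variable set on App Engine; connection_name is the placeholder from the question):

import os

if os.getenv('GAE_APPLICATION'):
    # On App Engine: connect through the Cloud SQL unix socket.
    DATABASES['default']['HOST'] = '/cloudsql/connection_name'
else:
    # Locally: connect through the Cloud SQL proxy over TCP.
    DATABASES['default']['HOST'] = '127.0.0.1'
    DATABASES['default']['PORT'] = '5432'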
I want to use the AWS Elasticsearch Service with my Django application, which is running on an EC2 instance.
For that I use these settings:
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch5_backend.Elasticsearch5SearchEngine',
        'URL': 'https://vpc-ES-CLUSTER.ap-south-1.es.amazonaws.com:9200/',
        'INDEX_NAME': 'haystack',
        'INCLUDE_SPELLING': True,
    },
}
I am not even able to establish the connection. I am getting this error:
raise ConnectionError('N/A', str(e), e)
elasticsearch.exceptions.ConnectionError: ConnectionError((, 'Connection to vpc-ES-CLUSTER.ap-south-1.es.amazonaws.com timed out. (connect timeout=10)')) caused by: ConnectTimeoutError((, 'Connection to vpc-ES-CLUSTER.ap-south-1.es.amazonaws.com timed out. (connect timeout=10)'))
I have updated the access policy to allow the user to edit and list, and I added a TCP rule for port 9200 to the security group. How do I connect EC2 to the Elasticsearch service inside a VPC?
The VPC endpoint works on port 443, not 9200. Use
'URL': 'https://vpc-ES-CLUSTER.ap-south-1.es.amazonaws.com:443/',
and open port 443 in the security group.
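Before involving Haystack, a quick sanity check is to hit the cluster health API directly from the EC2 instance. This minimal sketch reuses the endpoint from the question:

import requests

# The _cluster/health endpoint answers on the same host/port Haystack will use.
resp = requests.get(
    'https://vpc-ES-CLUSTER.ap-south-1.es.amazonaws.com:443/_cluster/health',
    timeout=10,
)
print(resp.status_code, resp.json())

If this times out, the problem is the security group or VPC routing rather than the Django configuration.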
I am trying to use my localhost database from inside a Docker Django container.
I have set listen_addresses = '*' in postgresql.conf.
I have added host all all localhost,192.168.1.9 trust to pg_hba.conf.
192.168.1.9 is my en0 address.
Now I want to use 192.168.1.9 as the host in the database settings in Django:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'db_name',
        'USER': 'db_user',
        'PASSWORD': 'db_password',
        'HOST': '192.168.1.9',
        'PORT': '5432',
    }
}
I am trying this but am not able to succeed. Am I doing something wrong? I want Postgres to accept all connections so the Django app container can connect to my local machine's database.
I am trying
psql -h 192.168.1.9 -U db_user -d db_name
and getting:
psql: could not connect to server: Permission denied
    Is the server running on host "192.168.1.9" and accepting
    TCP/IP connections on port 5432?
I am not sure what I am doing wrong.
Docker containers are usually on a separate network interface (docker0), so if your app wants to reach the host it has to go through that interface.
You can get the IP of the host from inside your container with:
/sbin/ip route|awk '/default/ { print $3 }'
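If you would rather resolve this inside Django than shell out, here is a minimal sketch that reads the default gateway from /proc/net/route (Linux containers only; on Docker Desktop for Mac or Windows the special hostname host.docker.internal serves the same purpose):

import socket
import struct

def default_gateway():
    # /proc/net/route lists routes in hex; the default route has
    # destination 00000000, and the gateway field is little-endian hex.
    with open('/proc/net/route') as fh:
        for line in fh.readlines()[1:]:
            fields = line.split()
            if fields[1] == '00000000':
                return socket.inet_ntoa(struct.pack('<L', int(fields[2], 16)))
    return None

DATABASES['default']['HOST'] = default_gateway() or 'host.docker.internal'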
Currently I am trying to deploy a Django app on Google App Engine (GAE). All goes well and the app deploys, but once it is deployed, its connection to the Postgres instance is lost. I don't know why this is happening. The following is from my settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'dbname',
        'USER': 'username',
        'PASSWORD': 'password',
    }
}

# In the flexible environment, you connect to CloudSQL using a unix socket.
# Locally, you can use the CloudSQL proxy to proxy a localhost connection
# to the instance
DATABASES['default']['HOST'] = '/cloudsql/shopnroar-175407:us-central1:snr-instance1'
if os.getenv('GAE_INSTANCE'):
    pass
else:
    DATABASES['default']['HOST'] = '100.107.126.241'
When I run it locally, it connects to Google Cloud Postgres since I have given the IPv4 address, but as soon as I deploy it on GAE, the following error comes up when accessing the database:
Is the server running locally and accepting
connections on Unix domain socket "/cloudsql/shopnroar-175407:us-central1:snr-instance1/.s.PGSQL.5432"?
Here is my app.yaml:
# [START runtime]
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT SNR.wsgi

env_variables:
  # Replace user, password, database, and instance connection name with the values obtained
  # when configuring your Cloud SQL instance.
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://amad.uddin:goingtoin1122@/shopnroar?host=/cloudsql/shopnroar-175407:us-central1:snr-instance1

beta_settings:
  cloud_sql_instances: shopnroar-175407:us-central1:snr-instance1

runtime_config:
  python_version: 2
# [END runtime]
Can anybody tell me how I can make the connection to the Postgres instance after deploying the Django app on GAE?
Any help or suggestions will be highly appreciated.
Actually, forget my old answer: try turning the app.yaml value into a string; that helped me out:
cloud_sql_instances: 'shopnroar-175407:us-central1:snr-instance1'
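In context, the quoted value sits in app.yaml exactly where the unquoted one was (only the quotes are new):

beta_settings:
  cloud_sql_instances: 'shopnroar-175407:us-central1:snr-instance1'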
I am hoping to use Amazon's Elasticsearch service to power search over long text fields in a Django database. However, I don't want to expose this search to those who don't have a login, and I don't want to rely on security through obscurity or some IP restriction tactic (unless it would work well with an existing Heroku app, where the Django app is deployed).
Haystack seems to go a long way toward this, but there doesn't seem to be an easy way to configure it to use Amazon's IAM credentials to access the Elasticsearch service. This functionality does exist in elasticsearch-py, which it uses:
https://elasticsearch-py.readthedocs.org/en/master/#running-with-aws-elasticsearch-service
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

host = 'YOURHOST.us-east-1.es.amazonaws.com'
awsauth = AWS4Auth(YOUR_ACCESS_KEY, YOUR_SECRET_KEY, REGION, 'es')

es = Elasticsearch(
    hosts=[{'host': host, 'port': 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection
)
print(es.info())
Regarding HTTP authorization, I found this in an issue at https://github.com/django-haystack/django-haystack/issues/1046:
from urlparse import urlparse

parsed = urlparse('https://user:pass@host:port')

HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': parsed.hostname,
        'INDEX_NAME': 'haystack',
        'KWARGS': {
            'port': parsed.port,
            'http_auth': (parsed.username, parsed.password),
            'use_ssl': True,
        }
    }
}
I am wondering if there is a way to combine these two, something like the following (which, as expected, gives an error since it's more than just a user name and password):
from requests_aws4auth import AWS4Auth

awsauth = AWS4Auth([ACCESS_KEY], [SECRET_KEY], [REGION], 'es')

HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': [AWSHOST],
        'INDEX_NAME': 'haystack',
        'KWARGS': {
            'port': 443,
            'http_auth': awsauth,
            'use_ssl': True,
            'verify_certs': True
        }
    },
}
The error here:
TypeError at /admin/
must be convertible to a buffer, not AWS4Auth
Request Method: GET
Request URL: http://127.0.0.1:8000/admin/
Django Version: 1.7.7
Exception Type: TypeError
Exception Value: must be convertible to a buffer, not AWS4Auth
Exception Location: /usr/lib/python2.7/base64.py in b64encode, line 53
Any ideas on how to accomplish this?
You are one step from success: add connection_class to KWARGS and everything should work as expected.
import elasticsearch
from requests_aws4auth import AWS4Auth

# awsauth as defined in the question
awsauth = AWS4Auth([ACCESS_KEY], [SECRET_KEY], [REGION], 'es')

HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': [AWSHOST],
        'INDEX_NAME': 'haystack',
        'KWARGS': {
            'port': 443,
            'http_auth': awsauth,
            'use_ssl': True,
            'verify_certs': True,
            'connection_class': elasticsearch.RequestsHttpConnection,
        }
    },
}
AWS Identity and Access Management (IAM) allows you to manage users and user permissions for AWS services, controlling which AWS resources users of AWS itself can access.
You cannot use IAM credentials to authorize users at the application level via http_auth, as it appears you are trying to do via Haystack here. They are different authentication schemes for different services. They are not compatible.
In your security use case, you have stated the need to 1) restrict access to your application, and 2) to secure the Elasticsearch service port from open access. These two requirements can be met using the following methods:
Restrict access to your application
I also don't want to expose this search to those who don't have a log in
For the front-end search app, you want to use a server level Basic access authentication (HTTP auth) configuration on the web server. This is where you want to control user login access to your app, via a standard http_auth username and password (again, not IAM). This will secure your app at the application level.
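If you would rather enforce the login inside Django itself, instead of or in addition to the web server, a minimal sketch is to wrap Haystack's SearchView with Django's login_required in urls.py (the URL pattern and view name here are illustrative):

from django.conf.urls import url
from django.contrib.auth.decorators import login_required
from haystack.views import SearchView

# Only authenticated users can reach the search page;
# anonymous users are redirected to the login URL.
urlpatterns = [
    url(r'^search/', login_required(SearchView()), name='haystack_search'),
]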
Secure the Elasticsearch service port
don't want to rely on security through obscurity or some
IP restriction tactic (unless it would work well with an existing
heroku app, where the Django app is deployed).
IP restriction is exactly what would work here, and it is consistent with AWS security best practices. You want to use security groups and security group rules as a firewall to control traffic for your EC2 instances.
Given a Haystack configuration of:
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'haystack',
    },
}
you will want to implement an IP restriction at the security group and/or ACL level on that IP and port, restricting access to only your Django host or other authorized hosts. This will secure it from any unauthorized access at the service level.
In your implementation, the URL will likely resolve to a public or private IP, depending on your network architecture.
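As a concrete sketch of that security group rule (the group ID and CIDR below are hypothetical placeholders), boto3 can open the Elasticsearch port to the Django host only:

import boto3

ec2 = boto3.client('ec2')

# Allow inbound 9200/tcp only from the Django host's private address.
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',  # placeholder security group
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 9200,
        'ToPort': 9200,
        'IpRanges': [{'CidrIp': '10.0.1.25/32', 'Description': 'Django host only'}],
    }],
)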