Why would Perlbal's reproxying give me a 503 for any remote URL?
X-REPROXY-URL: /path/to/a/local/file.jpg = working
X-REPROXY-URL: http://a-public-file-in-an-s3-bucket.jpg = HTTP 503
My Perlbal conf looks like:
CREATE POOL test_pool
POOL test_pool ADD 127.0.0.1:8888
POOL test_pool ADD 127.0.0.1:8889
CREATE SERVICE balancer
SET listen = 0.0.0.0:80
SET role = reverse_proxy
SET pool = test_pool
SET persist_client = on
SET persist_backend = on
SET verify_backend = on
SET enable_reproxy = true
ENABLE balancer
And I know I'm setting the header properly because, as I said, it works for local files and URLs.
It looks like Perlbal doesn't deal well with URLs like "bucket-name.s3.amazonaws.com". Changing the URL to the path-style form "s3.amazonaws.com/bucket-name/" works.
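For reference, here is a minimal sketch of how a backend could set the header with the path-style URL. The original question doesn't say what sits behind Perlbal, so the Django view and the bucket/file names below are made up for illustration:

from django.http import HttpResponse

def serve_photo(request):
    # Hypothetical view: return an empty response and let Perlbal fetch the
    # file itself. Note the path-style S3 URL (s3.amazonaws.com/bucket-name/...),
    # which reproxies fine, unlike the bucket-name.s3.amazonaws.com form.
    response = HttpResponse()
    response["X-REPROXY-URL"] = "http://s3.amazonaws.com/bucket-name/file.jpg"
    return response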
I am trying to configure Superset with multiple LDAP servers, but so far I have only been able to set it up for one server.
Is there any workaround in config.py to configure multiple servers at the same time?
I have given the following configuration in the config.py file.
config.py - LDAP configs
AUTH_TYPE = AUTH_LDAP
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Alpha"
AUTH_LDAP_SERVER = "ldap://ldap_example_server_one:389"
AUTH_LDAP_USE_TLS = False
AUTH_LDAP_BIND_USER = "CN=my_user,OU=my_users,DC=my,DC=domain"
AUTH_LDAP_BIND_PASSWORD = "mypassword"
AUTH_LDAP_SEARCH = "DC=my,DC=domain"
AUTH_LDAP_UID_FIELD = "sAMAccountName"
Note: it worked for the 'ldap_example_server_one:389' server, but when I tried to add another server it threw a configuration failure error.
You can't use multiple LDAP servers with the default LDAP authenticator from Flask-AppBuilder. You have to implement your own custom security manager, which can work with as many LDAP servers as you want.
First, create a new file, e.g. my_security_manager.py, and put these lines into it:
from superset.security import SupersetSecurityManager

class MySecurityManager(SupersetSecurityManager):
    def __init__(self, appbuilder):
        super(MySecurityManager, self).__init__(appbuilder)
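The skeleton above doesn't change any behaviour yet. As a rough, untested sketch of the multi-server idea, you could override auth_user_ldap to try each server in turn. This assumes Flask-AppBuilder reads AUTH_LDAP_SERVER from the Flask config on every authentication attempt, and the server list here is made up:

from flask import current_app
from superset.security import SupersetSecurityManager

class MySecurityManager(SupersetSecurityManager):
    # Hypothetical list of LDAP servers to try, in order
    LDAP_SERVERS = [
        "ldap://ldap_example_server_one:389",
        "ldap://ldap_example_server_two:389",
    ]

    def auth_user_ldap(self, username, password):
        # Assumption: the parent implementation picks up AUTH_LDAP_SERVER
        # from the app config each time it runs, so repointing it lets us
        # fall through the list until one server authenticates the user.
        for server in self.LDAP_SERVERS:
            current_app.config["AUTH_LDAP_SERVER"] = server
            user = super(MySecurityManager, self).auth_user_ldap(username, password)
            if user:
                return user
        return None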
Secondly, you should let Superset know that you want to use your brand new security manager. To do so, add these lines to your Superset configuration file (superset_config.py):
from my_security_manager import MySecurityManager
CUSTOM_SECURITY_MANAGER = MySecurityManager
Here is additional information on the topic.
I'm testing cookiecutter-django in production using docker-compose and Traefik with Let's Encrypt. I'm trying to configure it to work with two domains (mydomain1.com and mydomain2.com) using Django sites.
How do I configure Traefik so that it forwards traffic to the right domain?
This is my traefik.toml
logLevel = "INFO"
defaultEntryPoints = ["http", "https"]
# Entrypoints, http and https
[entryPoints]
# http should be redirected to https
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
# https is the default
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
# Enable ACME (Let's Encrypt): automatic SSL
[acme]
# Email address used for registration
email = "mail#mydomain1.com"
storage = "/etc/traefik/acme/acme.json"
entryPoint = "https"
onDemand = false
OnHostRule = true
# Use a HTTP-01 acme challenge rather than TLS-SNI-01 challenge
[acme.httpChallenge]
entryPoint = "http"
[file]
[backends]
[backends.django]
[backends.django.servers.server1]
url = "http://django:5000"
[frontends]
[frontends.django]
backend = "django"
passHostHeader = true
[frontends.django.headers]
HostsProxyHeaders = ['X-CSRFToken']
[frontends.django.routes.dr1]
rule = "Host:mydomain1.com"
Now all domains work over SSL, but I can only see mydomain1.com; mydomain2.com shows ERR_TOO_MANY_REDIRECTS.
What have you tried? What didn't work? By reading your question it's hard to tell.
There is an element of an answer in the issue you seem to have opened in the cookiecutter-django repo.
First things first, I would try to take Traefik out of the equation and make this work locally by doing something as suggested. Once it works locally, it's a matter of mapping the right port/container to the right domain in Traefik.
Assuming you've configured docker-compose to run the Django containers on ports 5000 and 5001, I think you would need to adjust your backends and frontends sections as below:
[backends]
[backends.django1]
[backends.django1.servers.server1]
url = "http://django:5000"
[backends.django2]
[backends.django2.servers.server1]
url = "http://django:5001"
[frontends]
[frontends.django1]
backend = "django1"
passHostHeader = true
[frontends.django1.headers]
HostsProxyHeaders = ['X-CSRFToken']
[frontends.django1.routes.dr1]
rule = "Host:mydomain1.com"
[frontends.django2]
backend = "django2"
passHostHeader = true
[frontends.django2.headers]
HostsProxyHeaders = ['X-CSRFToken']
[frontends.django2.routes.dr1]
rule = "Host:mydomain2.com"
I didn't try this, but it is the first thing I would do. Also, it looks like you can specify rules on frontends to adjust routing.
I am trying to create an app using Flask with more than 9 controllers, some of them in different subdomains.
I am using Flask-Login to let users log in. The users controller lives in a separate subdomain. The problem happens when I visit that subdomain: the console shows a redirect to log the user in first before accessing it, and I can't see the remember_me token in the cookies.
Here are the configurations for the extension:
SERVER_NAME = 'localhost:5000'
# Login configurations
REMEMBER_COOKIE_DURATION = timedelta(seconds=7*24*60*60)
REMEMBER_COOKIE_NAME = 'myapp.remember'
REMEMBER_COOKIE_SECURE = True
REMEMBER_COOKIE_HTTPONLY = True
REMEMBER_COOKIE_REFRESH_EACH_REQUEST = True
REMEMBER_COOKIE_DOMAIN = '.localhost:5000'
from .controllers.client import client_route
app.register_blueprint(client_route, subdomain='client')
The domain inside the cookies is localhost; how can I change it to something like .localhost?
You need to set REMEMBER_COOKIE_SECURE in your config to False
According to Flask-Login's documentation:
Restricts the “Remember Me” cookie’s scope to secure channels
(typically HTTPS).
It only sets the cookie on the HTTPS version of your application, so over plain HTTP (like localhost) the cookie is never set at all.
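A minimal sketch of the adjusted settings for local development over plain HTTP (the domain value is my assumption; note that cookie domains never include a port, so drop the :5000):

# Development over plain HTTP on localhost
REMEMBER_COOKIE_SECURE = False      # don't restrict the cookie to HTTPS
REMEMBER_COOKIE_HTTPONLY = True
# Cookie domains never carry a port; whether browsers honour a ".localhost"
# domain varies, so a fake local domain mapped in /etc/hosts may work better.
REMEMBER_COOKIE_DOMAIN = '.localhost'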
I am testing out deploying my Django application into AWS's Fargate Service.
Everything seems to run, but I am getting health check errors because the Application Load Balancer is sending requests to my Django application using the local IP of the host. This gives me an ALLOWED_HOSTS error in the logs.
Invalid HTTP_HOST header: '172.31.86.159:8000'. You may need to add '172.31.86.159' to ALLOWED_HOSTS
I have tried getting the local IP at task startup time and appending it to my ALLOWED_HOSTS, but this fails under Fargate:
import requests

EC2_PRIVATE_IP = None
try:
    EC2_PRIVATE_IP = requests.get('http://169.254.169.254/latest/meta-data/local-ipv4', timeout=0.01).text
except requests.exceptions.RequestException:
    pass

if EC2_PRIVATE_IP:
    ALLOWED_HOSTS.append(EC2_PRIVATE_IP)
Is there a way to get the ENI IP Address so I can append it to ALLOWED_HOSTS?
Now this works, and it lines up with the documentation, but I don't know if it's the BEST way or if there is a BETTER WAY.
My containers are running under the awsvpc network mode.
https://aws.amazon.com/blogs/compute/under-the-hood-task-networking-for-amazon-ecs/
...the ECS agent creates an additional "pause" container for each task before starting the containers in the task definition. It then sets up the network namespace of the pause container by executing the previously mentioned CNI plugins. It also starts the rest of the containers in the task so that they share their network stack of the pause container. (emphasis mine)
I assume the phrase
so that they share their network stack of the pause container
means we really just need the IPv4 address of the pause container. In my non-exhaustive testing it appears this is always Containers[0] in the ECS metadata: http://169.254.170.2/v2/metadata
With those assumptions in play, this does work, though I don't know how wise it is:
import os
import requests

EC2_PRIVATE_IP = None
METADATA_URI = os.environ.get('ECS_CONTAINER_METADATA_URI', 'http://169.254.170.2/v2/metadata')
try:
    resp = requests.get(METADATA_URI)
    data = resp.json()
    # The pause container is Containers[0]; its network stack is shared by
    # the rest of the task's containers
    container_meta = data['Containers'][0]
    EC2_PRIVATE_IP = container_meta['Networks'][0]['IPv4Addresses'][0]
except Exception:
    # silently fail as we may not be in an ECS environment
    pass

if EC2_PRIVATE_IP:
    # Be sure your ALLOWED_HOSTS is a list NOT a tuple
    # or .append() will fail
    ALLOWED_HOSTS.append(EC2_PRIVATE_IP)
Of course, if we pass in the container name that we must set in the ECS task definition, we could do this too:
import os
import requests

EC2_PRIVATE_IP = None
METADATA_URI = os.environ.get('ECS_CONTAINER_METADATA_URI', 'http://169.254.170.2/v2/metadata')
try:
    resp = requests.get(METADATA_URI)
    data = resp.json()
    # Look up this container by the name set in the task definition
    container_name = os.environ.get('DOCKER_CONTAINER_NAME', None)
    search_results = [x for x in data['Containers'] if x['Name'] == container_name]
    if len(search_results) > 0:
        container_meta = search_results[0]
    else:
        # Fall back to the pause container
        container_meta = data['Containers'][0]
    EC2_PRIVATE_IP = container_meta['Networks'][0]['IPv4Addresses'][0]
except Exception:
    # silently fail as we may not be in an ECS environment
    pass

if EC2_PRIVATE_IP:
    # Be sure your ALLOWED_HOSTS is a list NOT a tuple
    # or .append() will fail
    ALLOWED_HOSTS.append(EC2_PRIVATE_IP)
Either of these snippets of code would then go in the production settings for Django.
Is there a better way to do this that I am missing? Again, this is to allow the Application Load Balancer health checks. When using ECS (Fargate), the ALB sends the Host header as the local IP of the container.
In Fargate, there is an environment variable injected by the AWS container agent: ${ECS_CONTAINER_METADATA_URI}
This contains the URL to the metadata endpoint, so now you can do
curl ${ECS_CONTAINER_METADATA_URI}
The output looks something like
{
    "DockerId": "redact",
    "Name": "redact",
    "DockerName": "ecs-redact",
    "Image": "redact",
    "ImageID": "redact",
    "Labels": {},
    "DesiredStatus": "RUNNING",
    "KnownStatus": "RUNNING",
    "Limits": {},
    "CreatedAt": "2019-04-16T22:39:57.040286277Z",
    "StartedAt": "2019-04-16T22:39:57.29386087Z",
    "Type": "NORMAL",
    "Networks": [
        {
            "NetworkMode": "awsvpc",
            "IPv4Addresses": [
                "172.30.1.115"
            ]
        }
    ]
}
Under the key Networks you'll find IPv4Addresses.
Putting this into Python, you get:
import os
import requests

METADATA_URI = os.environ['ECS_CONTAINER_METADATA_URI']
container_metadata = requests.get(METADATA_URI).json()
ALLOWED_HOSTS.append(container_metadata['Networks'][0]['IPv4Addresses'][0])
An alternative solution is to create a middleware that bypasses the ALLOWED_HOSTS check just for your health check endpoint, e.g.
from django.http import HttpResponse
from django.utils.deprecation import MiddlewareMixin

class HealthEndpointMiddleware(MiddlewareMixin):
    def process_request(self, request):
        if request.META["PATH_INFO"] == "/health/":
            return HttpResponse("OK")
I solved this issue by doing this:
First, I installed this middleware that can handle CIDR masks on top of ALLOWED_HOSTS: https://github.com/mozmeao/django-allow-cidr
With this middleware I can use a setting like this:
ALLOWED_CIDR_NETS = ['192.168.1.0/24']
So you need to find out the subnets you configured in your ECS service definition; for me they were 10.3.112.0/24 and 10.3.111.0/24.
You add those to your ALLOWED_CIDR_NETS and you're good to go.
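For completeness, a sketch of what that looks like in settings.py; the middleware path is taken from the django-allow-cidr README, so double-check it against the version you install:

MIDDLEWARE = [
    "allow_cidr.middleware.AllowCIDRMiddleware",  # should come first
    # ... the rest of your middleware ...
]

# The VPC subnets attached to the ECS service, so health checks from any
# task IP in those ranges pass the host check
ALLOWED_CIDR_NETS = ["10.3.112.0/24", "10.3.111.0/24"]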
I have just pushed a web app into production, and requests to my Node.js server no longer contain the user cookie that Django has been setting by default on my localhost (where it was working).
My Node.js code looks for the cookie like this:
io.configure(function(){
    io.set('authorization', function(data, accept){
        if (data.headers.cookie) {
            data.cookie = cookie_reader.parse(data.headers.cookie);
            return accept(null, true);
        }
        return accept('error', false);
    });
    io.set('log level', 1);
});
and on localhost it has been getting this
cookie: 'username="name:1V7yRg:n_Blpzr2HtxmlBOzCipxX9ZlJ9U"; password="root:1V7yRg:Dos81LjpauTABHrN01L1aim-EGA"; csrftoken=UwYBgHUWFIEEKleM8et1GS9FuUPEmgKF; sessionid=6qmyso9qkbxet4isdb6gg9nxmcnw4rp3' },
in the request header.
But in production the header is the same, except there is no longer any cookie. Does Django only set this on localhost? How can I get it working in production?
I've tried setting these in my settings.py
CSRF_COOKIE_DOMAIN = '.example.com'
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_HTTPONLY = False
But so far no good.
Any insight would be great.
I just figured it out. I was making a request to Node.js on the client like this:
Message.socket = io.connect('http://123.456.789.10:5000');
where I used the IP address and port that my Node.js server was listening on. This is considered cross-domain, so browsers won't include cookies in the request. The easy fix is changing it to:
Message.socket = io.connect('http://www.mydomain.com:5000');