HashiCorp Vault - Django query from Docker container - django

Good afternoon,
I have two Docker containers, one running a Django app and the other running HashiCorp Vault, as I am starting to play with Vault in a dev environment.
I am using hvac from a Django view to write a secret to Vault; the secret is entered by a user to set up an integration with a REST API for a data pull.
When I run the following from my host machine, it writes just fine.
client_write = hvac.Client(url='http://127.0.0.1:8200', token='MY_TOKEN')
client_write.is_authenticated()
When I run the same from the Django container, it fails with:
requests.exceptions.ConnectionError:
HTTPConnectionPool(host='127.0.0.1', port=8200): Max retries exceeded
with url: /v1/auth/token/lookup-self (Caused by
NewConnectionError('<urllib3.connection.HTTPConnection object at
0x7f2a21990610>: Failed to establish a new connection: [Errno 111]
Connection refused'))
The Django container is published on localhost:8000 and Vault on localhost:8200. I also have a front end written in VueJS running on localhost:8080 that has no trouble communicating back and forth with the Django REST API (django-rest-framework).
Is there something in Vault where I need to list the hosts that queries can come from?
EDIT: Also, I have tried both my purpose-built tokens, with policies that allow writing the secrets in question, and the following permissions input (per https://github.com/hashicorp/vault/issues/781 ):
path "auth/token/lookup-self" {
  capabilities = ["read"]
}

path "auth/token/renew-self" {
  capabilities = ["update"]
}
Furthermore, the same behavior occurs when testing with the root token, and the purpose-built tokens work fine from the host system.
Vault Config:
{
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": "true"
    }
  },
  "backend": {
    "file": {
      "path": "/vault/file"
    }
  },
  "default_lease_ttl": "240h",
  "max_lease_ttl": "720h",
  "ui": true,
  "api_addr": "http://0.0.0.0:8200"
}
Thank you, I am very new to Vault and am struggling through it a bit.
BCBB

OK, so I neglected to provide enough relevant information in my first post, due to my own lack of understanding. Thanks to the reference to Compose networking in the comment above, I started down a path.
I realize now that I have each element in a different docker-compose project: project_ui/ for the VueJS front end, project_api/ for Django & Postgres, and project_vault/ for the HashiCorp Vault container.
To enable these to talk, I followed the guidance here: Communication between multiple docker-compose projects
I created a network in the Django app's compose file, and then attached the other containers to it as described in that answer.
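For illustration, here is a minimal sketch of that setup (the service names, network name, and file layout are assumptions, not taken from the question; the project that owns the network must be brought up first):

```yaml
# project_api/docker-compose.yml -- owns the shared network
services:
  django:
    build: .
    ports:
      - "8000:8000"
    networks:
      - shared

networks:
  shared:
    name: project_shared    # fixed name so other projects can reference it

---
# project_vault/docker-compose.yml -- joins the existing network
services:
  vault:
    image: vault
    ports:
      - "8200:8200"
    networks:
      - shared

networks:
  shared:
    external: true          # do not create; reuse project_shared
    name: project_shared
```

With both containers on the shared network, the hvac client in the Django container should address Vault by its service name (e.g. url='http://vault:8200') rather than 127.0.0.1 — inside a container, 127.0.0.1 is that container's own loopback interface, which is why the original connection was refused.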
Thanks.

Related

"Host header is specified and is not an IP address or localhost" message when using chromedp headless-shell

I'm trying to deploy chromedp/headless-shell to Cloud Run.
Here is my Dockerfile:
FROM chromedp/headless-shell
ENTRYPOINT [ "/headless-shell/headless-shell", "--remote-debugging-address=0.0.0.0", "--remote-debugging-port=9222", "--disable-gpu", "--headless", "--no-sandbox" ]
The command I used to deploy to Cloud Run is
gcloud run deploy chromedp-headless-shell --source . --port 9222
Problem
When I go to the path /json/list, I expect to see something like this:
[{
  "description": "",
  "devtoolsFrontendUrl": "/devtools/inspector.html?ws=localhost:9222/devtools/page/B06F36A73E5F33A515E87C6AE4E2284E",
  "id": "B06F36A73E5F33A515E87C6AE4E2284E",
  "title": "about:blank",
  "type": "page",
  "url": "about:blank",
  "webSocketDebuggerUrl": "ws://localhost:9222/devtools/page/B06F36A73E5F33A515E87C6AE4E2284E"
}]
but instead, I get this error:
Host header is specified and is not an IP address or localhost.
Is there something wrong with my configuration or is Cloud Run not the ideal choice for deploying this?
This specific issue is not unique to Cloud Run. It originates from a change in the Chrome DevTools Protocol that rejects remote access with this error, likely as a security measure against certain types of attacks. You can see the related Chromium pull request here.
I deployed a chromedp/headless-shell container to Cloud Run using your configuration and received the same error. Now, there is a useful comment in a GitHub issue showing a workaround for this problem: passing a Host: localhost header. While this does work when I tested it locally, it does not work on Cloud Run (it returns a 404 error). This 404 could be due to how Cloud Run itself uses the Host header to route requests to the correct service.
Unfortunately this answer is not a solution, but it sheds some light on what you are seeing and why. I would go for a different GCP service, such as GCE, which gives you plain virtual machines and is less managed.
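For reference, the Host-header workaround mentioned above can be sketched with Python's standard library (the port and endpoint are from the question; whether the target accepts the request still depends on the environment, as discussed):

```python
import urllib.request

# Build a request to the DevTools HTTP endpoint, overriding the Host
# header so Chrome's check sees "localhost" instead of the real host.
req = urllib.request.Request(
    "http://127.0.0.1:9222/json/list",
    headers={"Host": "localhost"},
)

# The override is stored on the request and is sent in place of the
# Host value that would otherwise be derived from the URL.
print(req.get_header("Host"))  # -> localhost

# urllib.request.urlopen(req) would then perform the actual call.
```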

Nextjs 404s on buildManifest across multiple EC2 instances

Context: I have a simple Next.js and KeystoneJS app. I've made duplicate deployments on 2 AWS EC2 instances. Each instance also has an Nginx reverse proxy routing port 80 to 3000 (my app's port). The 2 instances are also behind an application load balancer.
Problem: When routing to my default url, my application attempts to fetch the buildManifest for the nextjs application. This, however, 404s most of the time.
My Guess: Because the requests are coming in so close together, my load balancer is routing the second request for the buildManifest to the other instance. Since I did a separate yarn build on that instance, the build ids are different, and therefore it is not fetching the correct build. This request 404s and my site is broken.
My Question: Is there a way to ensure that all requests made from instance A get routed to instance A? Or is there a better way to do my builds on each instance such that their ids are the same? Is this a use case for Docker?
I have had a similar problem with our load balancer, and specifying a custom build id seems to have fixed it. Here's the dedicated issue, and this is what my next.config.js looks like now:
const execSync = require("child_process").execSync;
const lastCommitCommand = "git rev-parse HEAD";

module.exports = {
  async generateBuildId() {
    return execSync(lastCommitCommand).toString().trim();
  },
};
If you are using a custom build directory in your next.config.js file, remove it and use the default build directory. That is, remove a line like:
distDir: "build"
from your next.config.js file.
Credits: https://github.com/serverless-nextjs/serverless-next.js/issues/467#issuecomment-726227243

Select the service you wish to carry out a Google Task Handler

I am relatively new to Google Cloud Platform; I am able to create app services and manage databases. I am attempting to create a handler within Google Cloud Tasks (similar to the NodeJS sample found in this documentation).
However, the documentation fails to clearly address how a task request gets connected to a particular deployed service. I necessarily have more than one service in my project (one in Node for managing REST, and another in Python for managing geospatial data as asynchronous tasks).
My question: When running multiple services, how does Google Cloud Tasks know which service to direct the task towards?
Screenshot below as proof that I am able to request tasks to a queue.
When using App Engine routing for your tasks, they are routed to the "default" service. However, you can override this by defining an AppEngineRouting object, selecting your service, instance, and version, in the AppEngineHttpRequest field.
The sample shows a task routed to the default service's /log_payload endpoint.
const task = {
  appEngineHttpRequest: {
    httpMethod: 'POST',
    relativeUri: '/log_payload',
  },
};
You can update this to:
const task = {
  appEngineHttpRequest: {
    httpMethod: 'POST',
    relativeUri: '/log_payload',
    appEngineRouting: {
      service: 'non-default-service',
    },
  },
};
Learn more about configuring routes.
I wonder which "services" you are talking about, because it is always the current service. These HTTP requests are dispatched with the HTTP headers HTTP_X_APPENGINE_QUEUENAME and HTTP_X_APPENGINE_TASKNAME, as you can see in the screenshot with sample-tasks and some random numbers. If you want to task other services, they will have to have their own task queue(s).

spring cloud data flow server cloud foundry redirect to https

I have been wrestling with this for a couple of days now. I want to deploy Spring Cloud Data Flow Server for Cloud Foundry to my org's enterprise Pivotal Cloud Foundry instance. My problem is forcing all Data Flow Server web requests to TLS/HTTPS. Here is an example of a configuration I've tried to get this working:
# manifest.yml
---
applications:
- name: gdp-dataflow-server
  buildpack: java_buildpack_offline
  host: dataflow-server
  memory: 2G
  disk_quota: 2G
  instances: 1
  path: spring-cloud-dataflow-server-cloudfoundry-1.2.3.RELEASE.jar
  env:
    SPRING_APPLICATION_NAME: dataflow-server
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL: https://api.system.x.x.io
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_ORG: my-org
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE: my-space
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN: my-domain.io
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME: user
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD: pass
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES: dataflow-mq
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_BUILDPACK: java_buildpack_offline
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_SERVICES: dataflow-db
    SPRING_APPLICATION_JSON: |
      {
        "server": {
          "use-forward-headers": true,
          "tomcat": {
            "remote-ip-header": "x-forwarded-for",
            "protocol-header": "x-forwarded-proto"
          }
        },
        "management": {
          "context-path": "/management",
          "security": {
            "enabled": true
          }
        },
        "security": {
          "require-ssl": true,
          "basic": {
            "enabled": true,
            "realm": "Data Flow Server"
          },
          "user": {
            "name": "dataflow-admin",
            "password": "nimda-wolfatad"
          }
        }
      }
  services:
  - dataflow-db
  - dataflow-redis
Despite the security block in SPRING_APPLICATION_JSON, the Data Flow Server's web endpoints are still accessible via insecure HTTP. How can I force all requests to HTTPS? Do I need to customize my own build of the Data Flow Server for Cloud Foundry? I understand that PCF's proxy is terminating SSL/TLS at the load balancer, but configuring the forward headers should induce Spring Security/Tomcat to behave the way I want, should it not? I must be missing something obvious here, because this seems like a common desire that should not be this difficult.
Thank you.
There's nothing out-of-the-box from Spring Boot proper to enable/disable HTTPS and at the same time also intercept and auto-redirect plain HTTP -> HTTPS.
There is plenty of online literature on how to write a custom Configuration class that accepts multiple connectors in Spring Boot (see example).
Spring Cloud Data Flow (SCDF) is a simple Spring Boot application, so all this applies to the SCDF-server as well.
That said, if you intend to enforce HTTPS throughout your application interactions, there is a PCF setting [Disable HTTP traffic to HAProxy] that can be applied as a global override in Elastic Runtime - see docs. This applies consistently to all applications and is not specific to Spring Boot or SCDF. Even Python or Node or other types of apps can be forced to interact via HTTPS with this setting.

Gitlab (AWS) authentication using on-premise LDAP (Win 2008 R2)

I have installed GitLab Omnibus Community Edition 8.0.2 for evaluation purposes. I am trying to connect GitLab (Linux AMI on AWS) with our on-premise LDAP server running on Win 2008 R2. However, I am unable to do so. I am getting the following error (Could not authorize you from Ldapmain because "Invalid credentials"):
Here's the config I'm using for LDAP in gitlab.rb:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS' # remember to close this block with 'EOS' below
  main: # 'main' is the GitLab 'provider ID' of this LDAP server
    label: 'LDAP'
    host: 'XX.YYY.Z.XX'
    port: 389
    uid: 'sAMAccountName'
    method: 'plain' # "tls" or "ssl" or "plain"
    bind_dn: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
    password: 'pwd1234'
    active_directory: true
    allow_username_or_email_login: true
    base: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
    user_filter: ''
EOS
There are two users: gitlab (newly created AD user) and john.doe (old AD user)
Both users are able to query all AD users using the ldapsearch command, but when I use their respective details (one at a time) in gitlab.rb and run the gitlab-rake gitlab:ldap:check command, it displays info about that particular user only and not all users.
Earlier, gitlab-rake gitlab:ldap:check displayed the first 100 results from AD when my credentials (john.doe) were configured in gitlab.rb. Since these were my personal credentials, I asked my IT team to create a new AD user (gitlab) for GitLab. After I configured the new user (gitlab) in gitlab.rb and ran gitlab-rake gitlab:ldap:check, it only displayed that particular user's record. I thought this might be due to a permission issue for the newly created user, so I restored my personal credentials in gitlab.rb. Surprisingly, now when I run gitlab-rake gitlab:ldap:check, I get only one record for my user instead of the 100 records I was getting earlier. This is really weird! I think, somehow, GitLab is "forgetting" previous details.
Any help will really be appreciated.
The issue is resolved now. It seems it was a bug in the version (8.0.2) I was using; upgrading to 8.0.5 fixed it.
Also, values of bind_dn and base that worked for me are:
bind_dn: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
base: 'OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'