I am running an Elasticsearch service in Docker on AWS, and I have another Docker container that runs a .NET application. This is an excerpt of the docker-compose file I am using:
version: '3.1'
services:
  someservice:
    build:
      context: ./
      dockerfile: Dockerfile
      args:
        - AWS_ACCESS_KEY_ID
        - AWS_REGION
        - AWS_SECRET_ACCESS_KEY
        - AWS_SESSION_TOKEN
    restart: always
    container_name: some-server
    ports:
      - 8080:8080
    links:
      - elastic
  elastic:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0
    environment:
      TAKE_FILE_OWNERSHIP: "true"
    volumes:
      - ./.data/elastic:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    ulimits:
      nofile:
        soft: "65536"
        hard: "65536"
and this is an excerpt of the Dockerfile:
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /app

RUN mkdir /app/Service/
WORKDIR /app/Service/

COPY local_folder/Service/*.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY local_folder/Service/. ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM microsoft/dotnet:aspnetcore-runtime
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ENV ASPNETCORE_URLS="http://0.0.0.0:8080"
EXPOSE 8080
WORKDIR /app
COPY --from=build-env /app/Service/out .
ENTRYPOINT ["dotnet", "Service.dll"]
In the Service application I am using the NEST client, something like this:
var settings = new ConnectionSettings(new Uri(ELASTIC_SERVICE_URI))
    .DefaultIndex("some-index")
    .EnableDebugMode();
var client = new ElasticClient(settings);
var pingResponse = client.Ping();
if (pingResponse.IsValid)
{
    string searchTerm = "*";
    if (searchParam != null)
    {
        searchTerm = "*" + WebUtility.UrlDecode(searchParam) + "*";
    }
    int from = (int)(fromParam == null ? 0 : fromParam);
    int size = (int)(sizeParam == null ? 10 : sizeParam);
    var searchResponse = client.Search<SomeType>(s => s
        .From(from)
        .Size(size)
        .AllTypes()
        .Query(q => q
            .Bool(bq => bq
                .Must(m => m
                    .QueryString(qs => qs
                        .Query(searchTerm)
                        .AnalyzeWildcard(true)
                        .DefaultField("*")
                    )
                )
            )
        )
    );
    return Ok(searchResponse.Documents);
}
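For reference (and as a way to eyeball what the server actually receives), the `Search` call above corresponds roughly to the following query DSL. This is a Python sketch that builds the same body; the field values mirror the NEST code, everything else is illustrative:

```python
import json

def build_search_body(search_term: str, from_: int = 0, size: int = 10) -> str:
    """Build roughly the query-string search body the NEST call above produces."""
    body = {
        "from": from_,
        "size": size,
        "query": {
            "bool": {
                "must": [
                    {
                        "query_string": {
                            "query": search_term,
                            "analyze_wildcard": True,
                            "default_field": "*",
                        }
                    }
                ]
            }
        },
    }
    return json.dumps(body, indent=2)

print(build_search_body("*foo*"))
```

With `EnableDebugMode()` on, the actual request body is also available via `searchResponse.DebugInformation`, which is a good first check when a query returns an empty collection in one environment only.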
If I run the Docker containers locally, I can set the ELASTIC_SERVICE_URI constant to http://elastic:9200 and it works; I can even point it at the Elasticsearch service running in AWS, like https://HOST.us-east-2.es.amazonaws.com.
In both cases the service works fine and the search data is returned.
But when I run the containers on AWS, no data is retrieved; I just get an empty collection.
What can be wrong?
I am developing in Visual Studio Code (Windows) using a .devcontainer. I am doing some testing using the Django and Angular frameworks. Everything works perfectly, but after I run the command ng new angular-14-crud-example inside the container, I have a problem: if I then restart Visual Studio Code for any reason, the devcontainer no longer starts and returns the following error:
failed to solve: error from sender: open C:\Users\user\project\angular-14-crud-example\node_modules\make-fetch-happen\node_modules\mkdirp: Access denied.
Below are the details:
Django_Dockerfile:
FROM mcr.microsoft.com/devcontainers/anaconda:0-3
COPY environment.yml* .devcontainer/noop.txt /tmp/conda-tmp/
RUN if [ -f "/tmp/conda-tmp/environment.yml" ]; then umask 0002 && /opt/conda/bin/conda env update -n base -f /tmp/conda-tmp/environment.yml; fi \
&& rm -rf /tmp/conda-tmp
RUN pip install --upgrade pip
RUN mkdir /workspace
WORKDIR /workspace
COPY requirements.txt /workspace/
RUN pip install -r requirements.txt
COPY . /workspace/
docker-compose.yml
version: '3.8'
services:
  app:
    build:
      context: ..
      dockerfile: .devcontainer/Django_Dockerfile
    env_file:
      - .env
    volumes:
      - ../..:/workspaces:cached
    # Overrides default command so things don't shut down after the process ends.
    command: sleep infinity
    # Runs app on the same network as the database container, allows "forwardPorts" in devcontainer.json function.
    network_mode: service:db
  db:
    image: postgres:latest
    restart: unless-stopped
    volumes:
      - postgres-data:/var/lib/postgresql/data
    env_file:
      - .env
volumes:
  postgres-data:
devcontainer.json:
{
  "name": "Anaconda (Python 3) & PostgreSQL",
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
  "features": {
    "ghcr.io/devcontainers-contrib/features/angular-cli:1": {}
  }
}
Some detailed error log:
[2023-01-02T13:39:52.218Z] Dev Containers 0.266.1 in VS Code 1.74.2 (e8a3071ea4344d9d48ef8a4df2c097372b0c5161).
[2023-01-02T13:39:52.218Z] Start: Resolving Remote
[2023-01-02T13:39:52.268Z] Setting up container for folder or workspace: c:\Users\user\project
[2023-01-02T13:39:52.288Z] Start: Check Docker is running
[2023-01-02T13:39:52.290Z] Start: Run: docker version --format {{.Server.APIVersion}}
[2023-01-02T13:39:53.819Z] Stop (1529 ms): Run: docker version --format {{.Server.APIVersion}}
[2023-01-02T13:39:53.820Z] Server API version: 1.41
[2023-01-02T13:39:53.823Z] Stop (1535 ms): Check Docker is running
[2023-01-02T13:39:53.825Z] Start: Run: docker volume ls -q
[2023-01-02T13:39:54.907Z] Stop (1082 ms): Run: docker volume ls -q
[2023-01-02T13:39:55.293Z] Start: Run: docker ps -q -a --filter label=vsch.local.folder=c:\Users\user\project --filter label=vsch.quality=stable
[2023-01-02T13:39:56.247Z] Stop (954 ms): Run: docker ps -q -a --filter label=vsch.local.folder=c:\Users\user\project --filter label=vsch.quality=stable
[2023-01-02T13:39:56.248Z] Start: Run: C:\Users\user\AppData\Local\Programs\Microsoft VS Code\Code.exe --ms-enable-electron-run-as-node c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js up --user-data-folder c:\Users\user\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data --workspace-folder c:\Users\user\project --workspace-mount-consistency cached --id-label devcontainer.local_folder=c:\Users\user\project --log-level debug --log-format json --config c:\Users\user\project\.devcontainer\devcontainer.json --default-user-env-probe loginInteractiveShell --remove-existing-container --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2023-01-02T13:39:57.189Z] (node:18596) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
[2023-01-02T13:39:57.190Z] (Use `Code --trace-deprecation ...` to show where the warning was created)
[2023-01-02T13:39:57.194Z] #devcontainers/cli 0.25.2. Node.js v16.14.2. win32 10.0.19044 x64.
[2023-01-02T13:39:57.194Z] Start: Run: docker buildx version
[2023-01-02T13:39:59.057Z] Stop (1863 ms): Run: docker buildx version
[2023-01-02T13:39:59.058Z] github.com/docker/buildx v0.9.1 ed00243a0ce2a0aee75311b06e32d33b44729689
[2023-01-02T13:39:59.058Z]
[2023-01-02T13:39:59.059Z] Start: Resolving Remote
[2023-01-02T13:39:59.072Z] Start: Run: docker-compose version --short
[2023-01-02T13:40:01.002Z] Stop (1930 ms): Run: docker-compose version --short
[2023-01-02T13:40:01.003Z] Docker Compose version: 2.12.0
[2023-01-02T13:40:01.006Z] Start: Run: docker ps -q -a --filter label=com.docker.compose.project=mydemo_devcontainer --filter label=com.docker.compose.service=app
[2023-01-02T13:40:02.023Z] Stop (1017 ms): Run: docker ps -q -a --filter label=com.docker.compose.project=mydemo_devcontainer --filter label=com.docker.compose.service=app
[2023-01-02T13:40:02.026Z] Start: Run: docker-compose -f c:\Users\user\project\.devcontainer\docker-compose.yml --profile * config
[2023-01-02T13:40:03.955Z] Stop (1929 ms): Run: docker-compose -f c:\Users\user\project\.devcontainer\docker-compose.yml --profile * config
[2023-01-02T13:40:03.955Z] name: devcontainer
services:
app:
build:
context: c:\Users\user\project
dockerfile: .devcontainer/Django_Dockerfile
command:
- sleep
- infinity
environment:
POSTGRES_DB: postgres
POSTGRES_HOST: localhost
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
network_mode: service:db
volumes:
- type: bind
source: c:\Users\user\project
target: /workspaces
bind:
create_host_path: true
db:
environment:
POSTGRES_DB: postgres
POSTGRES_HOST: localhost
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
image: postgres:latest
networks:
default: null
restart: unless-stopped
volumes:
- type: volume
source: postgres-data
target: /var/lib/postgresql/data
volume: {}
networks:
default:
name: devcontainer_default
volumes:
postgres-data:
name: devcontainer_postgres-data
[2023-01-02T13:40:03.969Z] Start: Run: docker events --format {{json .}} --filter event=start
[2023-01-02T13:40:04.315Z] PersistedPath=c:\Users\user\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data, ContainerHasLabels=false
[2023-01-02T13:40:04.320Z] Start: Run: docker-compose -f c:\Users\user\project\.devcontainer\docker-compose.yml --profile * config
[2023-01-02T13:40:06.248Z] Stop (1928 ms): Run: docker-compose -f c:\Users\user\project\.devcontainer\docker-compose.yml --profile * config
[2023-01-02T13:40:06.248Z] name: devcontainer
services:
app:
build:
context: c:\Users\user\project
dockerfile: .devcontainer/Django_Dockerfile
command:
- sleep
- infinity
environment:
POSTGRES_DB: postgres
POSTGRES_HOST: localhost
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
network_mode: service:db
volumes:
- type: bind
source: c:\Users\user\project
target: /workspaces
bind:
create_host_path: true
db:
environment:
POSTGRES_DB: postgres
POSTGRES_HOST: localhost
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
image: postgres:latest
networks:
default: null
restart: unless-stopped
volumes:
- type: volume
source: postgres-data
target: /var/lib/postgresql/data
volume: {}
networks:
default:
name: devcontainer_default
volumes:
postgres-data:
name: devcontainer_postgres-data
[2023-01-02T13:40:06.257Z] Start: Run: docker inspect --type image mcr.microsoft.com/devcontainers/anaconda:0-3
[2023-01-02T13:40:07.211Z] Stop (954 ms): Run: docker inspect --type image mcr.microsoft.com/devcontainers/anaconda:0-3
[2023-01-02T13:40:07.881Z] local container features stored at: c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\node_modules\vscode-dev-containers\container-features
[2023-01-02T13:40:07.887Z] Start: Run: tar --no-same-owner -x -f -
[2023-01-02T13:40:08.644Z] Stop (757 ms): Run: tar --no-same-owner -x -f -
[2023-01-02T13:40:08.661Z] * Processing feature: ghcr.io/devcontainers-contrib/features/angular-cli:1
[2023-01-02T13:40:09.393Z] * Fetching feature: angular-cli_1_oci
[2023-01-02T13:40:10.457Z] Start: Run: docker build -t dev_container_feature_content_temp -f C:\Users\D525C~1.SAN\AppData\Local\Temp\devcontainercli\container-features\0.25.2-1672666807879\Dockerfile.buildContent C:\Users\D525C~1.SAN\AppData\Local\Temp\devcontainercli\container-features\0.25.2-1672666807879
[2023-01-02T13:40:11.323Z]
[2023-01-02T13:40:12.053Z]
[...]
[2023-01-02T13:41:36.506Z]
[+] Building 78.3s (7/22)
=> [internal] load build definition from Dockerfile-with-features 0.1s
=> => transferring dockerfile: 3.36kB 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/dev_container_feature_ 0.0s
=> [internal] load metadata for mcr.microsoft.com/devcontainers/anaconda 0.4s
=> [dev_containers_feature_content_source 1/1] FROM docker.io/library/de 0.0s
=> [dev_container_auto_added_stage_label 1/11] FROM mcr.microsoft.com/d 0.0s
=> ERROR [internal] load build context 77.6s
=> => transferring context: 265.86MB 77.5s
------
> [internal] load build context:
------
failed to solve: error from sender: open C:\Users\user\project\angular-14-crud-example\node_modules\make-fetch-happen\node_modules\mkdirp: Accesso negato.
[2023-01-02T13:41:36.669Z] Stop (81610 ms): Run: docker-compose --project-name mydemo_devcontainer -f c:\Users\user\project\.devcontainer\docker-compose.yml -f c:\Users\user\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data\docker-compose\docker-compose.devcontainer.build-1672666815049.yml build
[2023-01-02T13:41:36.671Z] Error: Command failed: docker-compose --project-name mydemo_devcontainer -f c:\Users\user\project\.devcontainer\docker-compose.yml -f c:\Users\user\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data\docker-compose\docker-compose.devcontainer.build-1672666815049.yml build
[2023-01-02T13:41:36.671Z] at pF (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1850:431)
[2023-01-02T13:41:36.671Z] at process.processTicksAndRejections (node:internal/process/task_queues:96:5)
[2023-01-02T13:41:36.672Z] at async foe (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1850:2457)
[2023-01-02T13:41:36.672Z] at async loe (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1832:2396)
[2023-01-02T13:41:36.672Z] at async Poe (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1899:2301)
[2023-01-02T13:41:36.672Z] at async Zf (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1899:3278)
[2023-01-02T13:41:36.673Z] at async aue (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:2020:15276)
[2023-01-02T13:41:36.673Z] at async oue (c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:2020:15030)
[2023-01-02T13:41:36.707Z] Stop (100459 ms): Run: C:\Users\user\AppData\Local\Programs\Microsoft VS Code\Code.exe --ms-enable-electron-run-as-node c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js up --user-data-folder c:\Users\user\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data --workspace-folder c:\Users\user\project --workspace-mount-consistency cached --id-label devcontainer.local_folder=c:\Users\user\project --log-level debug --log-format json --config c:\Users\user\project\.devcontainer\devcontainer.json --default-user-env-probe loginInteractiveShell --remove-existing-container --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2023-01-02T13:41:36.708Z] Exit code 1
[2023-01-02T13:41:36.716Z] Command failed: C:\Users\user\AppData\Local\Programs\Microsoft VS Code\Code.exe --ms-enable-electron-run-as-node c:\Users\user\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js up --user-data-folder c:\Users\user\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data --workspace-folder c:\Users\user\project --workspace-mount-consistency cached --id-label devcontainer.local_folder=c:\Users\user\project --log-level debug --log-format json --config c:\Users\user\project\.devcontainer\devcontainer.json --default-user-env-probe loginInteractiveShell --remove-existing-container --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2023-01-02T13:41:36.716Z] Exit code 1
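One detail worth noting in the log above: the step that fails is the build-context transfer (265.86 MB), and the access-denied file lives under the freshly generated `angular-14-crud-example/node_modules` tree. Whether or not that is the root cause here, a `.dockerignore` next to the build context is the usual way to keep `node_modules` out of the transfer entirely. A hedged sketch (entries are illustrative):

```
# .dockerignore in the build context root (here, the project folder)
node_modules
**/node_modules
.git
```

With the build context at `..` as in the compose file, this file would go in the project root so Docker skips those trees when sending the context to the daemon.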
I'm looking to create SQL views after the Django tables are built, as the views rely on tables created by Django models.
The problem is that, when trying to run a Python script via a Dockerfile CMD calling entrypoint.sh,
I get the following issue with the hostname when trying to connect to the PostgreSQL database from create_views.py.
I've tried the following hostname options: localhost, db, 0.0.0.0, 127.0.0.1, to no avail.
e.g.
psycopg2.OperationalError: connection to server at "0.0.0.0", port 5432 failed: Connection refused
could not translate host name "db" to address: Temporary failure in name resolution
connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
I can't use the containers' IP addresses, as every time you run docker-compose up the containers get different IPs...
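A side note on the hostnames tried above: from inside a container, localhost, 127.0.0.1, and 0.0.0.0 all refer to that container itself, not to the Postgres container; only the compose service name (here `postgres`, as the service is named in the compose file) resolves on the compose network. A small stdlib sketch of a port-wait helper that takes host and port as parameters, independent of psycopg2:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True once a TCP connection to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```

For example, `wait_for_port("postgres", 5432)` from the app container would confirm whether the database service is reachable at all before attempting the psycopg2 connection.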
docker-compose.yml
services:
  app:
    container_name: django-mhb-0.3.1
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./myproject/:/app/
    environment:
      - DB_HOST=db
      - DB_NAME=${POSTGRES_DB}
      - DB_USER=${POSTGRES_USER}
      - DB_PWD=${POSTGRES_PASSWORD}
    depends_on:
      - "postgres"
  postgres:
    container_name: postgres-mhb-0.1.1
    image: postgres:14
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      # The following works. However, it runs before the Dockerfile entrypoint script.
      # So in this case its trying to create views before the tables exist.
      #- ./myproject/sql/:/docker-entrypoint-initdb.d/
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
volumes:
  postgres_data:
Docker environment variables are in a .env file in the same directory as the Dockerfile and docker-compose.yml.
Django secrets are in a secrets.json file in the Django project directory.
Dockerfile
### Dockerfile for Django Applications ###
# Pull Base Image
FROM python:3.9-slim-buster AS myapp
# set work directory
WORKDIR /app
# set environment variables
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
# Compiler and OS libraries
RUN apt-get update \
&& apt-get install -y --no-install-recommends build-essential curl libpq-dev \
&& rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man \
&& apt-get clean \
&& useradd --create-home python
# install dependencies
COPY --chown=python:python ./requirements.txt /tmp/requirements.txt
COPY --chown=python:python ./scripts /scripts
ENV PATH="/home/python/.local/bin:$PATH"
RUN pip install --upgrade pip \
&& pip install -r /tmp/requirements.txt \
&& rm -rf /tmp/requirements.txt
USER python
# Section 5- Code and User Setup
ENV PATH="/scripts:$PATH"
CMD ["entrypoint.sh"]
entrypoint.sh
#!/bin/sh
echo "start of entrypoint"
set -e
whoami
pwd
#ls -l
#cd ../app/
#ls -l
python manage.py wait_for_db
python manage.py makemigrations
python manage.py migrate
python manage.py djstripe_sync_models product plan
python manage.py shell < docs/build-sample-data.py
## issue arises running this script ##
python manage.py shell < docs/create_views.py
python manage.py runserver 0.0.0.0:8000
create_views.py
#!/usr/bin/env python
import psycopg2 as db_connect

def get_connection():
    try:
        return db_connect.connect(
            database="devdb",
            user="devuser",
            password="devpassword",
            host="0.0.0.0",
            port=5432,
        )
    except (db_connect.Error, db_connect.OperationalError) as e:
        # t_msg = "Database connection error: " + e + "/n SQL: " + s
        print('t_msg ', e)
        return False

try:
    conn = get_connection()
    ...
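As a hedged aside (not the asker's code): inside the compose network the database is reachable via its service name, and the hard-coded host="0.0.0.0" is what breaks in the container. Reading the connection parameters from the environment keeps the script working both inside and outside Docker; the variable names below mirror the compose file above, and the defaults are illustrative:

```python
import os

def connection_params() -> dict:
    """Collect psycopg2 connection kwargs from the environment, with local-dev defaults."""
    return {
        "database": os.environ.get("DB_NAME", "devdb"),
        "user": os.environ.get("DB_USER", "devuser"),
        "password": os.environ.get("DB_PWD", "devpassword"),
        # In compose, DB_HOST should be the database *service* name (e.g. "postgres").
        "host": os.environ.get("DB_HOST", "localhost"),
        "port": int(os.environ.get("DB_PORT", "5432")),
    }
```

The connect call then becomes `db_connect.connect(**connection_params())`, and only the environment differs between host and container runs.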
I've removed the rest of the script as it's unnecessary.
When I run Django/PostgreSQL outside Docker on my local machine, localhost works fine, as you would expect.
Hoping someone can help; it's doing my head in, and I've spent a few days looking for a possible answer.
Thanks
Thanks to hints from Erik, I solved it as follows:
python manage.py makemigrations --empty yourappname
Then I added the following (note: cut down for space):
from django.db import migrations

def get_all_items_view(s=None):
    s = ""
    s += "create or replace view v_all_items_report"
    s += " as"
    s += " SELECT project.id, project.slug, project.name as p_name,"
    ...
    return s

def get_full_summary_view(s=None):
    s = ""
    s += "CREATE or replace VIEW v_project_summary"
    s += " AS"
    s += " SELECT project.id, project.slug, project.name as p_name,"
    ...
    return s

class Migration(migrations.Migration):
    dependencies = [
        ('payment', '0002_payment_user'),
    ]
    operations = [
        migrations.RunSQL(get_all_items_view()),
        migrations.RunSQL(get_full_summary_view()),
        migrations.RunSQL(get_invoice_view()),
        migrations.RunSQL(get_payment_view()),
    ]
Note: make sure the last table that needs to exist is listed in the migration's dependencies before the views get created. In my case Django had already chained all the other migrations as dependent on one another in order.
In my Dockerfile, entrypoint.sh is where I had the commands to makemigrations, migrate, and build some sample data.
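Incidentally, the `s += ...` concatenation in the answer above can be collapsed into a single triple-quoted string, which is easier to keep in sync with the SQL. A minimal sketch; the column list and FROM clause are illustrative, not the asker's actual view:

```python
def get_all_items_view() -> str:
    """Return the CREATE VIEW statement as one readable block (columns illustrative)."""
    return """
        CREATE OR REPLACE VIEW v_all_items_report AS
        SELECT project.id,
               project.slug,
               project.name AS p_name
        FROM project
    """
```

The result can be passed to `migrations.RunSQL()` exactly as before; a matching `reverse_sql="DROP VIEW IF EXISTS v_all_items_report"` also makes the migration reversible.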
I am having trouble setting up MinIO in my docker-compose. I found this problem discussed on several websites and tried to make it work, but I failed :D
Anyway, if anyone is able to help me, I will call them my personal hero.
Here is my code:
# docker-compose.yml
minio:
  container_name: minio
  image: minio/minio
  ports:
    - "9000:9000"
  volumes:
    - ./minio-data:/data
  env_file:
    - app/.env
  command: server /minio-data
mc:
  image: minio/mc
  depends_on:
    - minio
  entrypoint: >
    /bin/sh -c "
    until (/usr/bin/mc config host add myminio http://minio:9000 access-key secret-key) do echo '...waiting...' && sleep 1; done;
    /usr/bin/mc mb myminio/local-bucket/;
    /usr/bin/mc policy set download myminio/local-bucket;
    exit 0;
    "
# settings.py
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_ACCESS_KEY_ID = env.str("MINIO_ACCESS_KEY", default='access-key')
AWS_SECRET_ACCESS_KEY = env.str("MINIO_SECRET_KEY", default='secret-key')
AWS_STORAGE_BUCKET_NAME = env.str("AWS_STORAGE_BUCKET_NAME", default='local-bucket')
MINIO_STORAGE_USE_HTTPS = False
if DEBUG:
    AWS_S3_ENDPOINT_URL = env.str("AWS_S3_ENDPOINT_URL", default='http://minio:9000')
# .env
MINIO_ACCESS_KEY=access-key
MINIO_SECRET_KEY=secret-key
AWS_STORAGE_BUCKET_NAME=local-bucket
AWS_S3_ENDPOINT_URL=http://minio:9000
And that's my console log:
I want to do CI/CD with CircleCI to ECR and ECS.
The Dockerfiles work correctly locally with docker-compose,
but I am getting the following error in CircleCI:
COPY failed: stat /var/lib/docker/tmp/docker-builder505910231/b-plus-app/build: no such file or directory
Here is the relevant code where the error occurred.
↓Dockerfile(react)↓
FROM node:14.17-alpine
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN rm -r -f b-plus-app/build && cd b-plus-app \
&& rm -r -f node_modules && npm i && npm run build
↓Dockerfile(nginx)↓
FROM nginx:1.15.8
RUN rm -f /etc/nginx/conf.d/*
RUN rm -r -f /usr/share/nginx/html
#Stop Here
COPY b-plus-app/build /var/www
COPY prod_conf/ /etc/nginx/conf.d/
CMD /usr/sbin/nginx -g 'daemon off;' -c /etc/nginx/nginx.conf
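For context on why the COPY may fail in CI: `b-plus-app/build` only exists after `npm run build` has run, and in a fresh CircleCI checkout that directory is typically absent, so the nginx build context has nothing to stat. One common pattern is a single multi-stage build that produces the artifacts and copies them with `COPY --from`, so the host checkout never needs to contain `build/`. A hedged sketch of a merged Dockerfile.prod (paths and stage name are assumptions based on the two Dockerfiles above):

```dockerfile
# Stage 1: build the react app inside the image (hypothetical merge of the two Dockerfiles)
FROM node:14.17-alpine AS build
WORKDIR /usr/src/app
COPY b-plus-app/ ./b-plus-app/
RUN cd b-plus-app && npm i && npm run build

# Stage 2: serve the artifacts with nginx, copying from the build stage, not the host
FROM nginx:1.15.8
RUN rm -f /etc/nginx/conf.d/*
RUN rm -r -f /usr/share/nginx/html
COPY --from=build /usr/src/app/b-plus-app/build /var/www
COPY prod_conf/ /etc/nginx/conf.d/
CMD /usr/sbin/nginx -g 'daemon off;' -c /etc/nginx/nginx.conf
```

The alternative is to persist the `build/` output between CI jobs (e.g. via a workspace) so the nginx build context actually contains it.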
↓.circleci/config.yml↓
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@6.15
  aws-ecs: circleci/aws-ecs@2.0.0
workflows:
  react-deploy:
    jobs:
      - persist_to_workspace:
      - aws-ecr/build-and-push-image:
          account-url: AWS_ECR_ACCOUNT_URL
          region: AWS_REGION
          aws-access-key-id: AWS_ACCESS_KEY_ID
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY
          create-repo: true
          path: 'front/'
          repo: front
          tag: "${CIRCLE_SHA1}"
          filters:
            branches:
              only: main
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: 'b_plus_service'
          cluster-name: 'b-plus'
          service-name: 'b-plus'
          container-image-name-updates: "container=front,tag=${CIRCLE_SHA1}"
  nginx-deploy:
    jobs:
      - aws-ecr/build-and-push-image:
          account-url: AWS_ECR_ACCOUNT_URL
          region: AWS_REGION
          aws-access-key-id: AWS_ACCESS_KEY_ID
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY
          create-repo: true
          dockerfile: Dockerfile.prod
          path: 'front/'
          repo: nginx
          tag: "${CIRCLE_SHA1}"
          #requires:
          #  - react-deploy:
          #  - rails-deploy:
          filters:
            branches:
              only: main
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: 'b_plus_service'
          cluster-name: 'b-plus'
          service-name: 'b-plus'
          container-image-name-updates: "container=nginx,tag=${CIRCLE_SHA1}"
If you know how to fix the problem, please let me know. Thank you for reading my question.
Disclaimer: sorry for my bad English; I am new to Angular, Django, and production.
I am trying to push the first draft of what I've made to a local production server I own, running CentOS 7.
Up until now I was working in dev mode with proxy.config.json to bind Django and Angular; so far so good.
{
  "/api": {
    "target": "example.me",
    "secure": false,
    "logLevel": "debug",
    "changeOrigin": true
  }
}
When I wanted to push to production, however, I failed to bind the frontend container to the backend. This is the setup I made:
- Containerizing Angular and putting the compiled files in an NGINX container -- port 3108
- Containerizing Django and running gunicorn -- port 80
- A Postgres image
Dockerfiles and docker-compose
Django Dockerfile
FROM python:3.8
# USER app
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
Angular Dockerfile
FROM node:12.16.2 AS build
LABEL version="0.0.1"
WORKDIR /app
COPY ["package.json","npm-shrinkwrap.json*","./"]
RUN npm install -g @angular/cli
RUN npm install --silent
COPY . .
RUN ng build --prod
FROM nginx:latest
RUN rm -rf /usr/share/nginx/html/*
COPY ./nginx/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/dist/frontend /usr/share/nginx/html
EXPOSE "3108"
Docker-compose
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=database_name
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    networks:
      - app
  backend:
    build: .
    command: gunicorn --bind 0.0.0.0:80 myproject.wsgi
    volumes:
      - .:/code
    ports:
      - "80:80"
    networks:
      - app
    depends_on:
      - db
  frontend:
    build:
      context: ../frontend
      dockerfile: Dockerfile
    command: nginx -g "daemon off;"
    ports:
      - "3108:3108"
    networks:
      - app
    depends_on:
      - backend
networks:
  app:
NGINX config file
server {
    listen 3108;
    server_name example.me;
    root /usr/share/nginx/html;

    location / {
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
}
I tried to mimic the Angular proxy, following one answer from this post where they talked about Docker, by adding:
location /api {
    proxy_pass example.me;
}
This resulted in the backend returning a 403 error.
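For what it's worth, `proxy_pass` normally takes a URL with a scheme, and inside the compose network the backend is addressable by its service name. A hedged variant of the location block, assuming the `backend` service from the compose file above (with gunicorn on port 80):

```
location /api {
    # "backend" is the compose service name; gunicorn listens on port 80 there
    proxy_pass http://backend:80;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

The forwarded headers matter because Django's CSRF and ALLOWED_HOSTS checks see whatever host the proxy passes along, which is one common source of 403 responses behind nginx.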
Then I changed the BaseEndPoint to request directly from the server and added corsheaders to Django, and I started getting a 401 error.
environment.prod
export const environment = {
  production: true,
  ConfigApi: {
    BaseEndPoint: 'example.me',
    LoginEndPoint: '/api/account/login/',
    RegisterEndPoint: '/api/account/registration/',
    MembersList: '/api/membres_list/',
    Meeting: '/api/meeting/create_list/',
    TaskList: '/api/task_list/create/',
  }
};
I can't point out the issue or its source; I should note that requests from Postman to the backend work just fine.
TL;DR
The backend keeps rejecting frontend requests with 403 or 401, and I don't know why.