docker bind volume aws - amazon-web-services

I want to access /home/ubuntu from inside a Docker container I'm running on an EC2 instance via an ECS task. How can I configure the task so that, inside my container, accessing /local/ accesses /home/ubuntu?
My initial approach was to create a bind mount in the ECS task, with Volume name: local and no source path, but it seems I cannot access it. The container runs a Python app as USER 1000, and Python raises an error: [Errno 13] Permission denied: '/local'. Specifically, in Python I'm running:
import os
import uuid
os.makedirs(f"/local/temp/{uuid.uuid4()}")
Using less /etc/passwd I see ubuntu:x:1000:1000:Ubuntu:/home/ubuntu:/bin/bash, and if I change to the ubuntu user (sudo su - ubuntu) it has access to /home/ubuntu.
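A likely explanation, not confirmed in this thread: a bind mount declared with no source path is created by Docker on the host, typically under /var/lib/docker/volumes and owned by root, so a process running as UID 1000 cannot write to it. A minimal sketch of the alternative, assuming the EC2 launch type and placeholder family/container/image names, is to give the volume an explicit host sourcePath so the task maps the instance's /home/ubuntu (owned by UID 1000) to /local in the container:

# Hypothetical task-definition fragment; only the volumes/mountPoints fields matter here,
# the family, container name, and image are placeholders.
cat > taskdef.json <<'EOF'
{
  "family": "my-task",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-image:latest",
      "memory": 512,
      "mountPoints": [
        { "sourceVolume": "local", "containerPath": "/local", "readOnly": false }
      ]
    }
  ],
  "volumes": [
    { "name": "local", "host": { "sourcePath": "/home/ubuntu" } }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json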

Related

Pipenv Install in AWS Lambda Function

I am attempting to run the Algolia DocSearch scraper in a Lambda environment at set intervals, as described here. I've got the Docker container uploaded to ECR and attached to a Lambda function, but when I run it the Lambda errors because the entrypoint of the container is pipenv run python -m src.index. Pipenv attempts to create a directory and fails with OSError: [Errno 30] Read-only file system: '/home/sbx_user1051'.
To work around this I created an EFS file system with an access point that the Lambda has access to. The issue is that the volume gets mounted at /mnt/..., which is not where pipenv is trying to write. I'm a bit stuck here. Is there a clever way to get pipenv pointed at the EFS mount point?
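One approach worth trying (a sketch, untested here): pipenv decides where to write from environment variables such as HOME, WORKON_HOME, and PIPENV_CACHE_DIR, so a small wrapper entrypoint can point those at the EFS mount before handing off to the original command. The /mnt/efs path and the script name are assumptions:

#!/bin/sh
# entrypoint.sh - hypothetical wrapper: make pipenv's writable locations live on EFS.
# /mnt/efs is a placeholder for wherever the access point is actually mounted.
export HOME=/mnt/efs
export WORKON_HOME=/mnt/efs/virtualenvs
export PIPENV_CACHE_DIR=/mnt/efs/pipenv-cache
exec pipenv run python -m src.index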

No RDS environ when using SSH on AWS EC2

I installed Django on AWS EC2 and everything is working fine.
When trying to run commands to administrate Django, I receive an error because I'm missing some environment variables.
Environment details:
Platform: arn:aws:elasticbeanstalk:us-east-2::platform/Python 3.8 running on 64bit Amazon Linux 2/3.3.7
Tier: WebServer-Standard-1.0
For instance, running this from an SSH session will fail and say the key doesn't exist:
source /var/app/venv/*/bin/activate
python3
import os
print(os.environ['RDS_DB_NAME'])
How can I get the env variables to be set and usable in SSH?
Note: when Django is run by the server everything works as expected and Django can access the DB; the goal is to be able to run commands manually.
Thank you
You have to load those env variables manually on EB if you SSH into your instance:
export $(sudo cat /opt/elasticbeanstalk/deployment/env | xargs)
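Once that export has run, anything started from the same shell sees the variables, so the usual Django management commands work. A short usage sketch (the /var/app/current path is the standard app directory on this platform, but treat it as an assumption):

export $(sudo cat /opt/elasticbeanstalk/deployment/env | xargs)
source /var/app/venv/*/bin/activate
cd /var/app/current                  # deployed application directory (assumed standard path)
python3 manage.py showmigrations     # any manage.py command now sees RDS_DB_NAME etc.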

Cannot connect to the Docker daemon at unix:///var/run/docker.sock.( Gitlab )

I have an AWS instance with Docker installed on it, and some containers are running. I have set up a Laravel project inside Docker.
I can access this web application through the AWS IP address as well as the DNS address (GoDaddy).
I have also set up GitLab CI/CD to publish the code to the AWS instance.
When I try to push the code through GitLab pipelines, I get the following error in the pipeline:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I checked Docker and it is running properly. Any clues, please?
.gitlab-ci.yml
http://pastie.org/p/7ELo6wJEbFoKaz7jcmJdDp
The pipeline is failing at deploy-api-staging -> script -> scripts/ci/build.
build script
http://pastie.org/p/1iQLZs5GqP2m5jthB4YCbh
deploy script
http://pastie.org/p/2ho6ElfN2iWRcIZJjQGdmy
From what I see, you have installed and registered the GitLab Runner directly on your EC2 instance.
I think the problem is that you haven't given your GitLab Runner user permission to use Docker.
From the official Docker documentation:
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
If you don’t want to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
GitLab Runners use the gitlab-runner user by default when running any CI/CD pipeline, and that user won't use sudo (nor should it be in the sudoers file!), so we have to configure it correctly.
First of all, create a docker group on the EC2 instance where the GitLab Runner is registered:
sudo groupadd docker
Then, we are going to add the user gitlab-runner to that group:
sudo usermod -aG docker gitlab-runner
And we are going to verify that the gitlab-runner user actually has access to Docker:
sudo -u gitlab-runner -H docker info
Now your pipelines should be able to access the Unix socket at unix:///var/run/docker.sock without any problem.
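If the pipeline still reports the same error after this, the runner process may still be holding its old group membership; restarting it (and Docker, if the docker group was only just created) is a reasonable extra step:

sudo systemctl restart docker      # only needed if the docker group was just created
sudo gitlab-runner restart         # restart the runner so it picks up the new group membership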
Additional Steps if using Docker Runners
If you're using the Docker executor in your runner, you now have to mount that Unix socket into the Docker image you're using.
[[runners]]
  url = "https://gitlab.com/"
  token = "REGISTRATION_TOKEN"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.12"
    privileged = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
Take special care with the contents of the volumes clause.

Startup script doesn't run Airflow webserver VM GCP

I'm trying to automatically run the Airflow webserver and scheduler in a VM on boot using a startup script, following the documentation here: https://cloud.google.com/compute/docs/instances/startup-scripts/linux. Here is my script:
export AIRFLOW_HOME=/home/name/airflow
cd /home/name/airflow
nohup airflow scheduler >> scheduler.log &
nohup airflow webserver -p 8080 >> webserver.log &
The .log files are created, which means the script is being executed, but the webserver and the scheduler don't start.
Any apparent reason?
I have tried replicating the Airflow webserver startup script on a GCP VM using the documentation.
Steps followed to run the Airflow webserver startup script on a GCP VM:
Create a Service Account. Give minimum access to BigQuery with the role of BigQuery Job User and Dataflow with the role of Dataflow Worker. Click Add Key/Create new key/Done. This will download a JSON file.
Create a Compute Engine instance. Select the Service Account created.
Install Airflow libraries. Create a virtual environment using miniconda.
Initialize your metadata database and register at least one admin user with the commands:
airflow db init
airflow users create -r Admin -u username -p mypassword -e example@mail.com -f yourname -l lastname
Whitelist your IP for port 8080: create a firewall rule and apply it to the GCP VM instance. Now go to the terminal and start the webserver using the command:
airflow webserver -p 8080
Open another terminal and start the Scheduler.
export AIRFLOW_HOME=/home/acachuan/airflow-medium
cd airflow-medium
conda activate airflow-medium
airflow db init
airflow scheduler
We want Airflow to start immediately after the Compute Engine instance starts, so we can create a Cloud Storage bucket, create the script, upload the file, and keep it as a backup.
Now pass the Linux startup script from Cloud Storage to the VM; refer to Passing a startup script that is stored in Cloud Storage to an existing VM. You can also pass a startup script directly to an existing VM.
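For completeness, here is a sketch of what that startup script could look like. GCE startup scripts run as root, so the commands are run as the user who owns the miniconda environment, with that environment activated; the miniconda path below is an assumption, and the user and environment names follow the example above:

#!/bin/bash
# Hypothetical startup script: run Airflow as the owning user rather than as root.
# The miniconda install path is assumed; adjust to your VM.
sudo -u acachuan bash -c '
  export AIRFLOW_HOME=/home/acachuan/airflow-medium
  source /home/acachuan/miniconda3/bin/activate airflow-medium
  cd "$AIRFLOW_HOME"
  nohup airflow scheduler >> scheduler.log 2>&1 &
  nohup airflow webserver -p 8080 >> webserver.log 2>&1 &
'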
Note: PermissionDenied desc = The caller does not have permission means you don't have sufficient permissions; request access from your project, folder, or organization admin, depending on the assets you are trying to export. To access files created by the root user you need the appropriate read, write, or execute permissions; refer to File permissions.

How to run Jupyter notebook on AWS instance

How to run Jupyter notebook on AWS instance, chmod 400 error
I want to run my Jupyter notebooks in the cloud, on an AWS EC2 instance.
--
I'm following this tutorial:
https://www.codingforentrepreneurs.com/blog/jupyter-notebook-server-aws-ec2-aws-vpc
--
I have the EC2 instance all set up, as well as nginx.
--
Problem is:
Typing chmod 400 JupyterKey.pem only works on macOS, not in Windows PowerShell.
cd path/to/my/dev/folder/
chmod 400 JupyterKey.pem
ssh ubuntu@34.235.154.196 -i JupyterKey.pem
Error: The term 'chmod' is not recognized as the name of a cmdlet, function, script file, or operable program.
CategoryInfo: ObjectNotFound
FullyQualifiedErrorId: CommandNotFoundException
AWS has a managed Jupyter Notebook service as part of Amazon SageMaker.
SageMaker-hosted notebook instances let you spin up a Jupyter notebook with one click, with pay-per-hour pricing (similar to EC2 billing), and let you upload your existing notebooks directly onto the managed instance through the instance URL and the AWS console.
Check out this tutorial for a guide on getting started!
I had the same permission problem and fixed it by running the following command on the Amazon Linux instance:
sudo chown user:user ~/certs/mycert.pem