I am attempting to run the Algolia DocSearch scraper in a Lambda environment at set intervals, as described here. I've got the Docker container uploaded to ECR and attached to a Lambda function, but when I run it the Lambda errors because the entrypoint of the container is pipenv run python -m src.index. Pipenv attempts to create a directory and fails with OSError: [Errno 30] Read-only file system: '/home/sbx_user1051'.
To work around this I created an EFS file system with an access point that the Lambda function has access to. The issue is that the volume gets mounted at /mnt/..., which is not where pipenv is trying to write. I'm a bit stuck here. Is there a clever way to point pipenv at the EFS mount point?
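Not part of the original question, but one plausible angle: the OSError comes from pipenv writing under $HOME (its virtualenv and cache directories), and the Lambda runtime user's home is read-only. If those paths are redirected to the EFS mount, or to /tmp (which Lambda does leave writable), the entrypoint may run unchanged. A minimal sketch with the AWS CLI; the function name, the mount path, and the choice of pipenv variables are assumptions to verify:
# Redirect pipenv's home, virtualenv, and cache directories to writable storage
# ("docsearch-scraper" and /mnt/efs are placeholders)
aws lambda update-function-configuration \
  --function-name docsearch-scraper \
  --environment "Variables={HOME=/mnt/efs,WORKON_HOME=/mnt/efs/.venvs,PIPENV_CACHE_DIR=/mnt/efs/.cache}"
If the virtualenv is already baked into the image, pointing HOME at /tmp alone may be enough.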
I want to access /home/ubuntu from inside a Docker container that I'm running on an EC2 instance via an ECS task. How can I configure the task so that, inside the container, accessing /local/ gives me /home/ubuntu?
My initial approach was to create a bind mount in the ECS task, with volume name local and no source path, but it seems I cannot access it. The container runs a Python app as USER 1000, and Python raises [Errno 13] Permission denied: '/local'. Specifically, in Python I'm running:
import os
import uuid
os.makedirs(f"/local/temp/{uuid.uuid4()}")
Using less /etc/passwd I can see ubuntu:x:1000:1000:Ubuntu:/home/ubuntu:/bin/bash, and if I change to the ubuntu user (sudo su - ubuntu) it has access to /home/ubuntu.
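A sketch of how the task definition could look for the EC2 launch type, assuming the goal is to bind-mount the host's /home/ubuntu at /local (the family, container name, image, and memory values are placeholders, not from the question). A volume with no source path is a Docker-managed volume owned by root, which would explain the EACCES for uid 1000; an explicit host sourcePath mounts the existing directory with its existing ubuntu ownership:
# Write a minimal task definition with a host bind mount and register it
cat > taskdef.json <<'EOF'
{
  "family": "my-app",
  "volumes": [
    { "name": "local", "host": { "sourcePath": "/home/ubuntu" } }
  ],
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app:latest",
      "memory": 512,
      "user": "1000",
      "mountPoints": [
        { "sourceVolume": "local", "containerPath": "/local" }
      ]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json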
I need to fetch a Docker image from AWS Elastic Container Registry, and apparently I need to do so as a non-root user. I have a basic install.sh script that sets up my AWS EC2 instance, so every time I launch a new instance I can theoretically just run this script and it will install programs, fetch the container, and set up my system how I want it.
However, the workaround that Docker provides for managing Docker as a non-root user (see below) does not work when executed from within a script. The reason, as I understand it, is that the last line starts a new subshell, and I can't do that from within the script.
sudo groupadd docker
sudo usermod -aG docker ${USER}
newgrp docker # cannot be executed from within a script
Is there any way around this? Or do I just have to execute all three lines AND pull the container manually every time?
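One possible workaround, not from the original post: sg runs a single command under another group without replacing the shell, so the first pull can happen in the same script run, and the normal group membership applies from the next login onward. The region, account ID, and image name below are placeholders:
sudo groupadd docker
sudo usermod -aG docker "$USER"
# Run the ECR login and the first pull under the docker group for this one command,
# instead of opening an interactive shell with newgrp
sg docker -c "aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com \
  && docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest"
A simpler alternative is to run the first pull with sudo docker and rely on the group membership afterwards.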
I have to mount an S3 bucket inside a Docker container so that we can store the container's contents in that bucket.
I found the video https://www.youtube.com/watch?v=FFTxUlW8_QQ&ab_channel=ValaxyTechnologies which shows how to do the same thing for an EC2 instance instead of a Docker container.
I am following the same steps as in the above link. So far I have done the following on the Docker container:
(Install FUSE Packages)
apt-get install build-essential gcc libfuse-dev libcurl4-openssl-dev libxml2-dev mime-support pkg-config libxml++2.6-dev libssl-dev
git clone https://github.com/s3fs-fuse/s3fs-fus...
cd s3fs-fuse
./autogen.sh
./configure
make
make install
(Ensure you have an IAM Role with Full Access to S3)
(Create the Mountpoint)
mkdir -p /var/s3fs-demo-fs
(Target Bucket)
aws s3 mb s3://s3fs-demo-bkt
But when I try to mount the S3 bucket using
s3fs s3fs-demo-bkt /var/s3fs-demo-fs -o iam_role=
I get the following message:
fuse: device not found, try 'modprobe fuse' first
I have looked at several solutions to this problem but have not been able to resolve it. Please let me know how I can solve it.
I encountered the same problem. The issue was later fixed by adding --privileged to the docker run command.
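For concreteness, a sketch of the two common ways to give the container access to FUSE (the image name is a placeholder): fuse needs /dev/fuse and the mount capability, which a plain container does not get, so either the broad --privileged flag or the narrower capability/device flags should work:
# Broad: run the container fully privileged
docker run --privileged -it my-s3fs-image bash
# Narrower alternative: grant only the FUSE device and the mount-related capability
docker run --cap-add SYS_ADMIN --device /dev/fuse -it my-s3fs-image bash
On some hosts the narrower form also needs --security-opt apparmor:unconfined.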
How to run Jupyter notebook on AWS instance, chmod 400 error
I want to run my Jupyter notebooks in the cloud, on an AWS EC2 instance.
--
I'm following this tutorial:
https://www.codingforentrepreneurs.com/blog/jupyter-notebook-server-aws-ec2-aws-vpc
--
I have the EC2 instance all set up, as well as nginx.
--
Problem is..
Typing chmod 400 JupyterKey.pem only works on macOS, not in Windows PowerShell:
cd path/to/my/dev/folder/
chmod 400 JupyterKey.pem
ssh ubuntu@34.235.154.196 -i JupyterKey.pem
Error: The term 'chmod' is not recognized as the name of a cmdlet, function, script file, or operable program
CategoryInfo: ObjectNotFound
FullyQualifiedErrorId: CommandNotFoundException
AWS has a managed Jupyter Notebook service as part of Amazon SageMaker.
SageMaker hosted notebook instances let you spin up a Jupyter notebook with one click, with pay-per-hour pricing (similar to EC2 billing), and let you upload your existing notebooks directly onto the managed instance, all through the instance URL and the AWS console.
Check out this tutorial for a guide on getting started!
I had the same permission problem and fixed it by running the following command on the Amazon Linux (AMI) instance:
sudo chown user:user ~/certs/mycert.pem
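If the command is instead failing in Windows PowerShell, where chmod does not exist at all, a rough equivalent (not from either answer above) is to tighten the key file's ACL with icacls so that only the current user can read it; a sketch using the filename from the question:
# Strip inherited permissions, then grant only the current user read access
icacls JupyterKey.pem /inheritance:r
icacls JupyterKey.pem /grant:r "$($env:USERNAME):(R)"
After that, ssh -i JupyterKey.pem should stop complaining about the key file's permissions.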
I have a user-data script that runs when launching an EC2 instance from an AMI image.
The script uses the AWS CLI, but I get "aws: command not found".
The AWS CLI is installed as part of the AMI (I can use it once the instance is up), but for some reason the script cannot find it.
Am I missing something? Any chance that the user-data script runs before the image is loaded (I find that hard to believe)?
Maybe the PATH environment variable is not set at that point?
Thanks,
any chance that the user-data script runs before the image is loaded
No, certainly not. It is a service on that image that runs the script.
Maybe the path env variable is not set at this point
This is most likely the issue. The scripts run as root, not as ec2-user, and don't have access to the PATH you may have configured in your ec2-user account. What happens if you try specifying /usr/bin/aws instead of just aws?
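As an illustration of that suggestion (the bucket, key, and destination path are made up), the user-data script can either extend PATH or call the binary by its absolute path:
#!/bin/bash
# User data runs as root with a minimal PATH, so be explicit about where aws lives
export PATH="$PATH:/usr/local/bin"
aws s3 cp s3://my-bucket/config.json /etc/myapp/config.json
# or, equivalently, use the full path reported by `which aws` from an interactive session:
# /usr/bin/aws s3 cp s3://my-bucket/config.json /etc/myapp/config.json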
You can install the AWS CLI and set up environment variables with your credentials. For example, in the user-data script you could write something like:
#!/bin/bash
apt-get update -y && apt-get install -y awscli
export AWS_ACCESS_KEY_ID=your_access_key_id_here
export AWS_SECRET_ACCESS_KEY=your_secret_access_key_here
aws s3 cp s3://test-bucket/something /local/directory/
If you are using a CentOS-based AMI, change the apt-get line to use yum instead; there the package is called aws-cli rather than awscli.