I installed Django on AWS EC2 and everything is working fine.
When trying to run commands to administer Django, I get an error because some environment variables are missing.
Environment details:
Platform: arn:aws:elasticbeanstalk:us-east-2::platform/Python 3.8 running on 64bit Amazon Linux 2/3.3.7
Tier: WebServer-Standard-1.0
For instance, running this from an SSH session fails and says the key doesn't exist:
source /var/app/venv/*/bin/activate
python3
import os
print(os.environ['RDS_DB_NAME'])
How can I get the environment variables set and usable in an SSH session?
Note: when Django is run from the server everything works as expected and it can access the DB; the goal is to be able to run commands manually.
Thank you
On Elastic Beanstalk you have to load those environment variables manually when you SSH into your instance:
export $(sudo cat /opt/elasticbeanstalk/deployment/env | xargs)
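Putting that together with the virtualenv activation from the question, a typical manual session would look like the sketch below; the /var/app/current/manage.py path and the showmigrations command are assumptions for illustration, so adjust them to your app.
# Activate the application's virtualenv (path from the question)
source /var/app/venv/*/bin/activate
# Load the Elastic Beanstalk environment properties into the shell
export $(sudo cat /opt/elasticbeanstalk/deployment/env | xargs)
# Confirm the variables are now visible
echo "$RDS_DB_NAME"
# Run a Django management command manually (path is an assumption)
python3 /var/app/current/manage.py showmigrations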
Related
I want to access /home/ubuntu from inside a Docker container running on an EC2 instance via an ECS task. How can I configure the task so that, inside the container, accessing /local/ gives me access to /home/ubuntu?
My initial approach was to create a bind mount in the ECS task definition, with volume name local and no source path, but it seems I cannot access it. The container runs a Python app as USER 1000, and Python raises an error: [Errno 13] Permission denied: '/local'. Specifically, in Python I'm running:
import os
import uuid
os.makedirs(f"/local/temp/{uuid.uuid4()}")
Using less /etc/passwd I see the entry ubuntu:x:1000:1000:Ubuntu:/home/ubuntu:/bin/bash, and if I switch to the ubuntu user (sudo su - ubuntu) it has access to /home/ubuntu.
I am working on mounting a Cloud Storage bucket in my Cloud Run app, using the example and code from the official tutorial https://cloud.google.com/run/docs/tutorials/network-filesystems-fuse
The application uses Docker only (no cloudbuild.yaml).
The Dockerfile builds without issue using the command:
docker build --platform linux/amd64 -t fusemount .
I then start the container with the following command:
docker run --rm -p 8080:8080 -e PORT=8080 fusemount
and when it runs, gcsfuse is invoked with both the mount directory and the bucket URL:
gcsfuse --debug_gcs --debug_fuse gs://<my-bucket> /mnt/gs
But the connection fails:
2022/12/11 13:54:35.325717 Start gcsfuse/0.41.9 (Go version go1.18.4) for app "" using mount point: /mnt/gcs
2022/12/11 13:54:35.618704 Opening GCS connection...
2022/12/11 13:57:26.708666 Failed to open connection: GetTokenSource: DefaultTokenSource: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I have already set up the application-default credentials with the following command:
gcloud auth application-default login
and I have a Python-based Cloud Functions project, tested on the same local machine, that has no problem accessing the same storage bucket with the same default credentials.
What am I missing?
Google client libraries look in ~/.config/gcloud when using the application-default authorization approach.
Your Docker container doesn't contain this config when running locally.
So you might want to mount it when running the container:
$ docker run --rm -v /home/$USER/.config/gcloud:/root/.config/gcloud -p 8080:8080 -e PORT=8080 fusemount
Some notes:
I'm not sure which OS you are using, so replace /home/$USER with the real path to your home directory.
Likewise, I'm not sure your image uses /root as the home directory, so make sure the path from the first note is mounted to the right place.
Make sure your local user is authorized with the gcloud CLI, as you mentioned, using the command gcloud auth application-default login.
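As a quick sanity check (a sketch, assuming the fusemount image from the question and /root/.config/gcloud as the mount target, as above), you can list the mounted config from inside the container before starting the app:
# Override the entrypoint to confirm the gcloud config is visible inside the container
docker run --rm -v /home/$USER/.config/gcloud:/root/.config/gcloud \
  --entrypoint ls fusemount -la /root/.config/gcloud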
Let me know if this helped.
If you are using Docker and not Google Compute Engine (GCE), did you try mounting a service account key when running the container and using that key when mounting gcsfuse? (A sketch follows below.)
If you are building and deploying to Cloud Run, did you grant the required permissions mentioned in https://cloud.google.com/run/docs/tutorials/network-filesystems-fuse#ship-code?
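For the service-account key route, a minimal sketch could look like the following; the key path /path/to/sa-key.json and the mount target /tmp/sa-key.json are placeholders rather than values from the question, and GOOGLE_APPLICATION_CREDENTIALS is the standard variable the Google auth libraries (and therefore gcsfuse) pick up:
# Mount a service account key into the container and point application-default credentials at it
docker run --rm -p 8080:8080 -e PORT=8080 \
  -v /path/to/sa-key.json:/tmp/sa-key.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/sa-key.json \
  fusemount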
I am attempting to gain shell-level access from a Windows machine to a Linux ECS task in an AWS Fargate cluster via the AWS CLI (v2.1.38) through aws-vault.
The redacted command I am using is:
aws-vault exec my-profile -- aws ecs execute-command --cluster
my-cluster-name --task my-task-id --interactive --command "/bin/sh"
but this fails with the following output:
The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.
Starting session with SessionId: ecs-execute-command-0bc2d48dbb164e010
SessionId: ecs-execute-command-0bc2d48dbb164e010 :
----------ERROR-------
Unable to start shell: Failed to start pty: fork/exec C:/Program: no such file or directory
I can see that ECS Exec is enabled on this task from the output of an aws describe call.
It appears that it's recognising the host as a Windows machine and attempting to initialise based on a Windows-specific variable.
Is anyone able to suggest what I can do to resolve this?
Ran into the same error. Using --command "bash" worked for me on Windows 10.
I was using Windows 7; I think that without WSL (Windows 10+) or Linux (or Mac) it just doesn't work. There's another suggestion, explained here, that wasn't worth the trouble for me:
Cannot start an AWS ssm session on EC2 Amazon linux instance
For me, I just used a Linux bastion inside AWS and it worked from there.
Using Windows PowerShell to run this command worked for me.
Ran into a similar issue. Not all Docker containers have bash available.
Try using:
--command "sh"
I am currently unable to access my RDS environment variables on the EC2 instance. The two are linked using Elastic Beanstalk.
I am trying to use the RDS environment variables in a PHP script via the $_SERVER global variable, but every time I check in the console they are empty strings. If I run echo ${RDS_HOSTNAME} in the console I also get an empty string.
However, when I run /opt/elasticbeanstalk/bin/get-config environment I get the following, with the correct credentials:
{
"COMPOSER_HOME":"/root",
"RDS_DB_NAME":"dbname",
"RDS_HOSTNAME":"dbhost.rds.amazonaws.com",
"RDS_PASSWORD":"dbpassword",
"RDS_PORT":"3306",
"RDS_USERNAME":"dbusername"
}
I've also connected to the database via the mysql command just to make sure that the EC2 instance can access the RDS database and it worked.
Using mysql -u dbusername -h dbhost.rds.amazonaws.com -p
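On Amazon Linux 2 platforms these properties are not exported into interactive shells automatically. A sketch of how to load them into the current shell, using the same /opt/elasticbeanstalk/deployment/env approach shown in the first answer above (assuming that file exists on this platform version):
# Export the Elastic Beanstalk environment properties into the current shell
export $(sudo cat /opt/elasticbeanstalk/deployment/env | xargs)
# The RDS variables should now resolve
echo ${RDS_HOSTNAME}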
I want to run a service on Google Cloud Run that uses Cloud Memorystore as cache.
I created a Memorystore instance in the same region as Cloud Run and used the example code to connect: https://github.com/GoogleCloudPlatform/golang-samples/blob/master/memorystore/redis/main.go. This didn't work.
Next I created a Serverless VPC Access connector, which didn't help. I use Cloud Run without a GKE cluster, so I can't change any cluster configuration.
Is there a way to connect from Cloud Run to Memorystore?
To connect Cloud Run (fully managed) to Memorystore you need to use the mechanism called "Serverless VPC Access" or a "VPC Connector".
As of May 2020, Cloud Run (fully managed) has Beta support for the Serverless VPC Access. See Connecting to a VPC Network for more information.
Alternatives to using this Beta include:
Use Cloud Run for Anthos, where GKE provides the capability to connect to Memorystore if the cluster is configured for it.
Stay within fully managed Serverless but use a GA version of the Serverless VPC Access feature by using App Engine with Memorystore.
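For the Serverless VPC Access route, the setup is roughly the two commands sketched below; the connector name, region, IP range, service name, and image are illustrative placeholders, and because the feature was Beta at the time the gcloud beta command group may be required:
# Create a Serverless VPC Access connector in the same region as the Cloud Run service
gcloud compute networks vpc-access connectors create my-connector \
  --region us-central1 --network default --range 10.8.0.0/28
# Attach the connector when deploying the Cloud Run service
gcloud beta run deploy my-service --image gcr.io/my-project/my-image \
  --region us-central1 --platform managed --vpc-connector my-connector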
While waiting for Serverless VPC connectors on Cloud Run (Google said yesterday that announcements would be made in the near term), you can connect to Memorystore from Cloud Run using an SSH tunnel via GCE.
The basic approach is the following.
First, create a forwarder instance on GCE:
gcloud compute instances create vpc-forwarder --machine-type=f1-micro --zone=us-central1-a
Don't forget to open port 22 in your firewall policies (it's open by default).
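If your project's network is missing the usual default-allow-ssh rule, a sketch of creating one (the rule name and source range are illustrative):
# Allow inbound SSH so gcloud compute ssh can reach the forwarder
gcloud compute firewall-rules create allow-ssh --allow tcp:22 --source-ranges 0.0.0.0/0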
Then install the gcloud CLI via your Dockerfile.
Here is an example for a Rails app; the Dockerfile uses a script as the entrypoint:
# Use the official lightweight Ruby image.
# https://hub.docker.com/_/ruby
FROM ruby:2.5.5
# Install gcloud
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
# Generate SSH key to be used by the SSH tunnel (see entrypoint.sh)
RUN mkdir -p /home/.ssh && ssh-keygen -b 2048 -t rsa -f /home/.ssh/google_compute_engine -q -N ""
# Install bundler
RUN gem update --system
RUN gem install bundler
# Install production dependencies.
WORKDIR /usr/src/app
COPY Gemfile Gemfile.lock ./
ENV BUNDLE_FROZEN=true
RUN bundle install
# Copy local code to the container image.
COPY . ./
# Run the web service on container startup.
CMD ["bash", "entrypoint.sh"]
Finally, open an SSH tunnel to Redis in your entrypoint.sh script:
#!/bin/bash
# Memorystore config
MEMORYSTORE_IP=10.0.0.5
MEMORYSTORE_REMOTE_PORT=6379
MEMORYSTORE_LOCAL_PORT=6379
# Forwarder config
FORWARDER_ID=vpc-forwarder
FORWARDER_ZONE=us-central1-a
# Start tunnel to Redis Memorystore in background
gcloud compute ssh \
--zone=${FORWARDER_ZONE} \
--ssh-flag="-N -L ${MEMORYSTORE_LOCAL_PORT}:${MEMORYSTORE_IP}:${MEMORYSTORE_REMOTE_PORT}" \
${FORWARDER_ID} &
# Run migrations and start Puma
bundle exec rake db:migrate && bundle exec puma -p 8080
With the solution above Memorystore will be available to your application on localhost:6379.
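To confirm the tunnel is up before the application starts, one could add a quick check to entrypoint.sh (a sketch; it assumes redis-cli is installed in the image, which the Dockerfile above does not do):
# Give the background tunnel a moment, then ping Redis through it (expects PONG)
sleep 5
redis-cli -h 127.0.0.1 -p ${MEMORYSTORE_LOCAL_PORT} ping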
There are a few caveats, though:
This approach requires the service account configured on your Cloud Run service to have the roles/compute.instanceAdmin role, which is quite powerful.
The SSH keys are baked into the image to speed up container boot time. That's not ideal.
There is no failover if your forwarder crashes.
I've written a longer and more elaborate approach in a blog post that improves the overall security and adds failover capabilities. The solution uses plain SSH instead of the gcloud CLI.
If you need something in your VPC, you can also spin up Redis on Compute Engine.
It's more costly (especially for a cluster) than Redis Cloud, but it's a temporary solution if you have to keep the data in your VPC.
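A rough sketch of that self-managed option; the instance name, zone, and machine type are illustrative, and it assumes a Debian/Ubuntu image where redis-server is available via apt:
# Create a small VM inside the VPC and install Redis on it
gcloud compute instances create redis-vm --machine-type=e2-small --zone=us-central1-a
gcloud compute ssh redis-vm --zone=us-central1-a \
  --command "sudo apt-get update && sudo apt-get install -y redis-server"
# By default Redis listens on localhost only; adjust bind/protected-mode in /etc/redis/redis.conf for VPC clients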