How to get files stored in an Elastic Beanstalk instance? - django

I have a django (1.10) app running in Elastic Beanstalk.
I want to dump some app data to fixtures and download those fixtures to my local machine (to replicate them in my local database).
So far, I've eb ssh'ed into my instance and dumped the data to ~/myapp_current.json.
But I cannot find a way to copy the file to my local machine; there is no eb scp command.

When you run eb ssh locally, eb prints out the actual SSH command it is running. For instance:
INFO: Running ssh -i /Users/me/.ssh/aws.pem ec2-user@3.4.5.6
Just copy that command, change ssh to scp, add the source and destination paths, and run it locally:
scp -i /Users/me/.ssh/aws.pem ec2-user@3.4.5.6:myapp_current.json ./
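For completeness, a minimal sketch of the full round trip, assuming a standard Django project layout and that the app's virtualenv is active on the instance (the app label myapp is a placeholder):

# On the instance, after eb ssh: dump one app's data to a fixture
python manage.py dumpdata myapp > ~/myapp_current.json
# On the local machine, after the scp above: load the fixture into the local database
python manage.py loaddata myapp_current.json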

In your environment there is an "Application versions" option, which lists all the versions of your application that you have uploaded. You can select the desired version and download it.
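As an alternative to the console (my own suggestion, not part of the answer above): application versions are stored in the Elastic Beanstalk S3 bucket for your account, so the AWS CLI can fetch them. Bucket name and key below are placeholders:

aws s3 ls s3://elasticbeanstalk-us-east-1-123456789012/
aws s3 cp s3://elasticbeanstalk-us-east-1-123456789012/myapp/app-sample-version.zip ./

Note that an application version contains the deployed source bundle, not files created at runtime, so this will not include fixtures dumped on the instance.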

Related

AWS Systems Manager Run Command on EC2 Fails

I'm running a FastAPI server on EC2 Ubuntu. Everything works fine when I SSH into the EC2 instance and run commands, but I want the server to keep running when my local machine is off.
So, I tried AWS Systems Manager's Run Command. The connection looks fine, but when I cd to the server code directory and run ls, it outputs nothing. Also, when I run poetry run python main.py in the server folder, which works perfectly when I SSH into the server from my local machine, it says poetry: not found.
Why is this happening? And is there another way I can run my server while being able to turn off my local machine?
There is no relation between your local machine and your server in the cloud; your EC2 instance stays alive and keeps running your services even when your machine is off. As for the Run Command symptoms: SSM executes commands as a different user (root, via the SSM agent) in a non-interactive shell, so it starts in a different working directory and does not load your user's login profile, which is likely why ls shows nothing and poetry (installed under your own user's home) is not on its PATH.
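If the goal is simply to keep the server up after you disconnect, a minimal sketch over a normal SSH session (the path and filename are assumptions based on the question):

cd /home/ubuntu/my-server
nohup poetry run python main.py > server.log 2>&1 &

For anything long-lived, a systemd service is the more robust choice, since it also restarts the server after crashes or reboots.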

Cannot access localhost

I deployed an application on Google Cloud (GKE). In order to access its UI, I did port-forwarding (port 9090). When I use the Cloud Shell web preview I can access the UI. However, when I try to open localhost:9090 in my own browser, I cannot access it. Do you know why I cannot access it from my browser? Is this normal?
Thank you!
Answer provided in the comments by a community member:
Do you know why I cannot access from my browser, is it normal?
Cloud Shell is where you're running kubectl port-forward. Port forwarding only applies to the host on which the command is run unless you have a chain of port-forwarding commands. If you want to access the UI from your local host, then you will need to run the kubectl port-forward on your local host too.
So how can I run the kubectl port-forward command on my local host for the application that I deployed to the cloud? Should I install the Google Cloud CLI on my local machine?
I assumed (!) that you're running kubectl port-forward on Cloud Shell. If that's correct, then you need to install kubectl on your local machine to run it there. Because of the way that GKE authenticates, it may also be prudent to install gcloud on your local machine. You can then use gcloud container clusters get-credentials ... to create a local Kubernetes (GKE) config file on your local machine that is then used by kubectl commands.
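Putting that together, a sketch of the local commands; the cluster, zone, and service names are placeholders:

gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl port-forward service/my-ui 9090:9090

After this, localhost:9090 in the local browser reaches the UI, because the forwarding now runs on the local host.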

Jenkins not connecting to AWS EC2 instance via SSH

I am trying to connect to an EC2 instance from Jenkins via SSH, but the connection always fails in the end. I am storing the SSH key in a global credential.
The build step is a shell task that uses the SSH Agent plugin, and the credential has the whole private key pasted in.
An SSH connection from my local PC works fine. I am a newbie with Jenkins, so this is very confusing for me.
You need to use the SSH plugin. Download the plugin via Manage Jenkins, configure the EC2 instance under SSH remote hosts, and follow the steps in this link:
https://www.thesunflowerlab.com/blog/jenkins-aws-ec2-instance-ssh/
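Since the question uses the SSH Agent plugin, it can also help to verify the key and connection by hand from the Jenkins host; this sketch mirrors what the plugin does (the key path and host are placeholders):

eval "$(ssh-agent -s)"
ssh-add ~/.ssh/jenkins-ec2-key.pem
ssh -o StrictHostKeyChecking=no ec2-user@203.0.113.10 'echo connected'

If this fails from the Jenkins host itself, the problem is the key or the instance's security group rather than the Jenkins configuration.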

How to access a self-managed Docker registry hosted on AWS EC2 from a Windows machine?

I want to set up a self-managed Docker private registry on an EC2 instance, without using the AWS ECR/ECS services, i.e. using the registry:2 container image, and make it accessible to the development team so that they can push/pull Docker images remotely.
The development team has Windows laptops with "Docker for Windows" installed.
Please note:
The EC2 instance is hosted on a private subnet.
I have already created an AWS ALB with an OpenSSL self-signed certificate and attached it to the EC2 instance so that the server can be accessed over an HTTPS listener.
I have deployed the Docker registry using the command below:
docker run -d -p 8080:5000 --restart=always --name registry registry:2
I think pre-routing of 443 to 8080 is in place, because when I open https:///v2/_catalog in the browser I get output in JSON format.
Currently, the catalog is empty because no image has been pushed to the registry yet.
I expect this Docker registry hosted on the AWS EC2 instance to be accessible remotely, i.e. from a remote Windows machine as well.
Any references/suggestions/steps to achieve this would be really helpful.
I have resolved the issue by following the steps below:
Added the --insecure-registry parameter to the docker.service file.
Created a new directory "certs.d/my-domain-name" under /etc/docker.
(Please note: here the domain name is the one at which the Docker registry is to be accessed.)
Placed the self-signed OpenSSL certificate and key for the domain name inside the above-mentioned directory.
Restarted Docker.
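A sketch of those steps as shell commands, assuming the certificate file is named domain.crt (the directory name must match the registry's domain, here the my-domain-name placeholder):

sudo mkdir -p /etc/docker/certs.d/my-domain-name
sudo cp domain.crt /etc/docker/certs.d/my-domain-name/ca.crt
sudo systemctl restart docker

On the Windows laptops, the self-signed certificate likewise has to be trusted (or the registry added to Docker's insecure-registries list) before docker push/pull against it will succeed.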

How can I create Docker certificates when running on EC2 instances?

I'm using Terraform to spin up an EC2 instance. Then, I run an Ansible playbook to configure the instance and run docker-compose.
I noticed that Ansible's docker_service requires key_path and cert_path to be set when running against a remote Docker daemon. I'd like to use a remote connection, as the Docker images required have already been built locally as part of my CI/CD process.
How can I generate the cert and key for an EC2 instance?
I found this question: How do I deploy my docker-compose project using Terraform?, but it uses remote-exec from Terraform and assumes the project source will exist on the remote machine.
There are a few questions which provide reasonable workflows, but they all involve using docker-machine which is not quite what I want.
I believe I found it:
https://docs.docker.com/engine/security/https/
This document describes how to create the CA, the server key/cert pair, and the client key/cert pair.
The following document describes how to create a certificate using Ansible:
https://docs.ansible.com/ansible/2.4/openssl_certificate_module.html
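For reference, a condensed sketch of the server-side portion of that Docker guide; $HOST stands for the EC2 instance's DNS name, and the IP and validity period are example values:

# Create a CA
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
# Create a server key and a CSR for the daemon's hostname
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr
# Sign the server certificate with the CA, allowing the host's name and IP
echo "subjectAltName = DNS:$HOST,IP:10.10.10.20" > extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf

The resulting ca.pem, server-cert.pem, and server-key.pem are what the Docker daemon's --tlscacert, --tlscert, and --tlskey options expect; the client key/cert pair for Ansible's key_path and cert_path is generated the same way and signed by the same CA.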