What is the easiest way to download a file out of an ECS container to a local machine?

I need to do some heap dumps and it would be great to have an easy (and fast) way to get the files as seamless as possible.
My current way of doing it (roughly scripted below) is:
Create the file
Optional: upload an SSH key to the EC2 instance if not yet known (depending on the security model used)
Open an SSH session (not using ASM, as it has some OS X flaws)
Copy the file from the Docker container to the EC2 instance
SCP the file to the local machine
Clean up
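Scripted end to end, that chain looks roughly like the sketch below (the instance address, key path, container ID, and file paths are all placeholders, not my real values):

```go
// Rough sketch of the manual chain above; host, key and container ID are placeholders.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a local command and aborts on the first failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "step failed:", err)
		os.Exit(1)
	}
}

func main() {
	const (
		host      = "ec2-user@<instance-ip>" // placeholder
		key       = "/path/to/my-key.pem"    // placeholder
		container = "<container-id>"         // placeholder
	)

	// copy the file out of the container onto the EC2 host
	run("ssh", "-i", key, host, "docker cp "+container+":/tmp/heap.hprof /tmp/heap.hprof")
	// pull the file from the EC2 host to the local machine
	run("scp", "-i", key, host+":/tmp/heap.hprof", ".")
	// clean up on the host
	run("ssh", "-i", key, host, "rm /tmp/heap.hprof")
}
```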
This seems overwhelmingly complex for getting a single file. Is there a more straightforward way of doing it? As this is an on-demand use case, I'd be OK with a manual AWS Console way or with tools that make it more convenient. Thanks!

Related

How can I automate script execution on AWS EC2 using the Go SDK?

I'm building an app that manages multiple EC2 instances using the Go SDK. I would like to run scripts on these instances in an automated way.
How can I achieve that? I don't think os.command => ssh => raw script stored as a string in code is best practice. Is there any clean way to achieve this?
Thanks
Is there any clean way to achieve this?
To bootstrap your instance, you would create a UserData script. The script runs only once, just after your instance is launched.
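If you also launch the instances from your app, a minimal sketch of passing a user data script through the Go SDK (aws-sdk-go v1; the AMI ID and the script itself are placeholders) could look like this:

```go
package main

import (
	"encoding/base64"
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := ec2.New(sess)

	// the bootstrap script runs once, at first boot, as root
	script := `#!/bin/bash
yum install -y htop`

	out, err := svc.RunInstances(&ec2.RunInstancesInput{
		ImageId:      aws.String("ami-0123456789abcdef0"), // placeholder AMI ID
		InstanceType: aws.String("t3.micro"),
		MinCount:     aws.Int64(1),
		MaxCount:     aws.Int64(1),
		// user data must be base64-encoded when passed via the API
		UserData: aws.String(base64.StdEncoding.EncodeToString([]byte(script))),
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(*out.Instances[0].InstanceId)
}
```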
For other remote execution of commands, you can use SSM Run Command to run commands on a single instance or on multiple instances.
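A rough sketch of calling Run Command from the Go SDK (aws-sdk-go v1; the instance ID and the commands are placeholders) might look like this:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := ssm.New(sess)

	// run a shell script on one or more instances via the SSM agent
	out, err := svc.SendCommand(&ssm.SendCommandInput{
		DocumentName: aws.String("AWS-RunShellScript"),
		InstanceIds:  []*string{aws.String("i-0123456789abcdef0")}, // placeholder instance ID
		Parameters: map[string][]*string{
			"commands": {aws.String("uptime"), aws.String("df -h")},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("command ID:", *out.Command.CommandId)
}
```

Note that the instances need the SSM agent running and an instance profile that allows SSM; the command ID can then be used to poll for the output.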
The way you suggest is actually valid and can work. I agree with you though, it wouldn't be my first choice either. I would either use the package golang.org/x/crypto/ssh (maintained by the Go team, though not part of the standard library) or an external solution like github.com/appleboy/easyssh-proxy.
I would lean towards golang.org/x/crypto/ssh, but if you don't have a preference there, then the Scp function of the latter package might be of particular interest to you. You can find examples of this in the readme of the project.
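For the golang.org/x/crypto/ssh route, a minimal sketch of running a single command on a remote instance (user, address, and key path are placeholders) could look like this:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// load the private key used to authenticate (placeholder path)
	key, err := os.ReadFile("/path/to/private-key.pem")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}

	config := &ssh.ClientConfig{
		User:            "ec2-user",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a sketch; verify host keys in real code
	}

	// placeholder address
	client, err := ssh.Dial("tcp", "<instance-ip>:22", config)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// run the remote command and print its combined output
	out, err := sess.CombinedOutput("uname -a")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```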
Secure copy protocol (SCP) is a means of securely transferring computer files between a local host and a remote host or between two remote hosts. It is based on the Secure Shell (SSH) protocol.
EDIT: After seeing Marcin's answer, I think my answer is more the plain-SSH, AWS-independent one. For the idiomatic AWS answer, please definitely look at his suggested solution!

How to retrieve a heap dump in PCF using SMB

I need the -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath options in the PCF manifest yml to create a heap dump on OutOfMemoryError.
I understand I can use SMB or NFS in the VM args, but how do I retrieve the heap dump file when the app goes OutOfMemory and is not accessible?
Kindly help.
I need the -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath options in the PCF manifest yml to create a heap dump on OutOfMemoryError
You don't need to set these options. The Java buildpack will take care of this for you. By default, it installs a jvmkill agent which will automatically do this.
https://github.com/cloudfoundry/java-buildpack/blob/main/docs/jre-open_jdk_jre.md#jvmkill
In addition, the jvmkill agent is smart enough that if you bind a SMB or NFS volume service to your application, it will automatically save the heap dumps to that location. From the doc link above...
If a Volume Service with the string heap-dump in its name or tag is bound to the application, terminal heap dumps will be written with the pattern <CONTAINER_DIR>/<SPACE_NAME>-<SPACE_ID[0,8]>/<APPLICATION_NAME>-<APPLICATION_ID[0,8]>/<INSTANCE_INDEX>--<INSTANCE_ID[0,8]>.hprof
The key is that you name the bound volume service appropriately, i.e. the name must contain the string heap-dump.
You may also do the same thing with non-terminal heap dumps using the Java Memory Agent that the Java buildpack can install for you upon request.
I understand I can use SMB or NFS in the VM args, but how do I retrieve the heap dump file when the app goes OutOfMemory and is not accessible?
To retrieve the heap dumps you need to somehow access the file server. I say "somehow" because it entirely depends on what you are allowed to do in your environment.
You may be permitted to mount the SMB/NFS volume directly to your PC. You could then access the files directly.
You may be able to retrieve the files through some other protocol like HTTP or FTP or SFTP.
You may be able to mount the SMB or NFS volume to another application, perhaps using the static file buildpack, to serve up the files for you.
You may need to request the files from an administrator with access.
Your best bet is to talk with the admin for your SMB or NFS server. He or she can inform you about the options that are available to you in your environment.

Is there a service in AWS that is equivalent to docker configs?

I have a WordPress site that is going to be hosted using ECS on AWS.
To make the management even more flexible, I plan not to store service configurations (i.e. php.ini, nginx.conf) inside the Docker image itself. I found that Docker Swarm offers "docker configs" for this purpose. Are there any equivalent tools that do the same thing? (I know AWS Secrets Manager can handle Docker secrets, though.)
Any advice or alternative approaches? Thank you all.
The most similar service you could use is probably AWS SSM Parameter Store.
You will need some logic to retrieve the values when you are running the image.
If you don't want the files inside the running containers either, then you pull the values from Parameter Store and add them to the environment; you will probably need to do some work in the application to read from the environment (the application stays decoupled from the actual source of the config). Alternatively, you can read directly from Parameter Store in the application (easier, but you have some coupling between your image and Parameter Store).
If your concern is only about not having the values in the image, but it is fine if they are inside the running container, then you can read from Parameter Store and inject the values into the usual file locations inside the container, so it is transparent to the application. A sketch of that is shown below.
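For example, a minimal sketch with the Go SDK (aws-sdk-go v1; the parameter name and target path are just placeholders) fetching a php.ini from Parameter Store at container start:

```go
package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := ssm.New(sess)

	// fetch the config stored under a placeholder parameter name
	// note: Parameter Store values have a size limit, so this works for small config files
	out, err := svc.GetParameter(&ssm.GetParameterInput{
		Name:           aws.String("/wordpress/prod/php-ini"),
		WithDecryption: aws.Bool(true),
	})
	if err != nil {
		panic(err)
	}

	// write it to the location the service expects, so the application stays unchanged
	if err := os.WriteFile("/usr/local/etc/php/php.ini", []byte(*out.Parameter.Value), 0644); err != nil {
		panic(err)
	}
	fmt.Println("config written")
}
```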
As additional approaches:
Especially for php.ini and nginx.conf, I like a simple approach: keep a separate git repo with different config files per environment.
You have a common Docker image regardless of the environment.
At build time, you pull the proper file for the environment and either save the values as environment variables or inject the file into the container.
And last: classic tools like Chef, Puppet, and also Ansible deserve a mention. They are more complex and maybe overkill here.
The two ways that I store configs and secrets for most services are Credstash, which is a combination of KMS and DynamoDB, and Parameter Store, which has already been mentioned.
The aws command line tool can be used to fetch from Parameter Store and S3 (for configs), while credstash is its own utility (quite useful and easy to use) and needs to be installed separately.
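If you would rather fetch the config from code than from the aws CLI, a small sketch with the Go SDK's S3 downloader (aws-sdk-go v1; bucket, key, and target path are placeholders) could look like this:

```go
package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	sess := session.Must(session.NewSession())
	downloader := s3manager.NewDownloader(sess)

	// local file the config will be written to (placeholder path)
	f, err := os.Create("/etc/nginx/nginx.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// bucket and key are placeholders for wherever you keep the config
	n, err := downloader.Download(f, &s3.GetObjectInput{
		Bucket: aws.String("my-config-bucket"),
		Key:    aws.String("prod/nginx.conf"),
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("downloaded %d bytes\n", n)
}
```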

How to read/get files from a Google Cloud Compute Engine disk without connecting to it?

I accidentally messed up the file system permissions, so attempting to use sudo (e.g. to read protected files) shows the message sudo: /usr/local/bin/sudo must be owned by uid 0 and have the setuid bit set.
The response from this answer (https://askubuntu.com/a/471503) suggests logging in as root to do so; however, I didn't set up a root password before, and this answer (https://stackoverflow.com/a/35017164/4343317) suggests using sudo passwd. Obviously, I am stuck in an infinite loop between the two answers above.
How can I read/get the files from the Google Cloud Compute Engine disk without logging in to the VM (I have full control of the VM instance and the disk as well)? Is there another, "higher" way to log in as root (such as from the gcloud tool or the Google Cloud console) to access the VM disk externally?
Thanks.
It looks like the following recipe may be of value:
https://cloud.google.com/compute/docs/disks/detach-reattach-boot-disk
What this article says is that you can shut down your VM, detach its boot disk and then attach it as a data disk to a second VM. In that second VM you will have the ability to make changes. However, if you don't know what changes you need to make to restore the system to sanity, then, as @John Hanley says, you might want to use this mounting technique to copy off your work, destroy your tainted VM, recreate a fresh one, copy your work back in, and start from there.

Saving images as files or in a database when using Docker

I'm using Docker for a project; the main reason for using it is to keep the application available even if one of the nodes (it's a 6-node cluster with Docker Swarm) is down.
The application is basically a Django app that can save some images from users, along with other models. I'm currently saving the images as files, but since I need to specify a volume locally for a single machine, I would like to know if it would be better to save the images in a database cluster, so they would be available even if a whole node goes down. Or is there another way?
Edit: note that the cluster runs locally and doesn't have internet access.
The two options are to perform the file sharing via a database or via the file system.
For file-system sharing, you can use something like GlusterFS: to each container it looks like it is mounting a host-local volume, but the volume is actually shared between the hosts via GlusterFS.
To my mind, if it's your application (i.e. you can modify it at will), saving the images in a database would be the easier approach for most developers.
The best solution is often to go for a hosted option (such as MongoDB Atlas). Making a database resilient and highly available is really hard, and unless you are an expert on Docker and Mongo, I would strongly recommend going for a hosted option.