Place a file permanently on a cluster launched through a DAG - amazon-web-services

First of all, I want to create a folder named ".ssh" on the cluster, and once created I want it to be there permanently. This folder will be created in the directory "/home/ssm-user/", and then I want to place a private key in this folder, which also needs to be permanent.
Currently I have to do this task manually every time I launch the cluster.
I have to open the AWS CLI through Session Manager, navigate to the directory mentioned above, and create the ".ssh" folder with the "mkdir" command; once the folder has been created, I put my file in it in order to use it.
But as already mentioned, I only want to do this procedure once, and after that the folder and key must be there whenever I launch my cluster.
Alternatively, if there is any other way to create that folder and place the file in it at launch time, that would be really helpful.
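One possible way to avoid repeating the manual Session Manager steps is to push the same commands through SSM Run Command once the cluster is up (for example from the DAG that launches it). The following is only a sketch, assuming the private key is stored in S3, the instance is SSM-managed, and it has permission to read that object; the instance id, bucket and key path are placeholders:
import boto3

ssm = boto3.client("ssm")

# Placeholders - substitute the real instance id and the S3 location of the key
INSTANCE_ID = "i-0123456789abcdef0"
KEY_S3_URI = "s3://my-bucket/keys/id_rsa"

ssm.send_command(
    InstanceIds=[INSTANCE_ID],
    DocumentName="AWS-RunShellScript",
    Parameters={
        "commands": [
            "mkdir -p /home/ssm-user/.ssh",
            f"aws s3 cp {KEY_S3_URI} /home/ssm-user/.ssh/id_rsa",
            "chown -R ssm-user:ssm-user /home/ssm-user/.ssh",
            "chmod 600 /home/ssm-user/.ssh/id_rsa",
        ]
    },
)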

Related

Where is a sensible place to put kube_config.yaml files on MWAA?

The example code in the MWAA docs for connecting MWAA to EKS has the following:
#use a kube_config stored in s3 dags folder for now
kube_config_path = '/usr/local/airflow/dags/kube_config.yaml'
This doesn't make me think that putting the kube_config.yaml file in the dags/ directory is a sensible long-term solution.
But I can't find any mention in the docs about where would be a sensible place to store this file.
Can anyone link me to a reliable source on this? Or make a sensible suggestion?
From the KubernetesPodOperator Airflow documentation:
Users can specify a kubeconfig file using the config_file parameter, otherwise the operator will default to ~/.kube/config.
In a local environment, the kube_config.yaml file can be stored in a specific directory reserved for Kubernetes (e.g. .kube, kubeconfig). Reference: KubernetesPodOperator (Airflow).
In the MWAA environment, where DAG files are stored in S3, the kube_config.yaml file can be stored anywhere in the root DAG folder (including any subdirectory in the root DAG folder, e.g. /dags/kube). The location of the file is less important than explicitly excluding it from DAG parsing via the .airflowignore file. Reference: .airflowignore (Airflow).
Example S3 directory layout:
s3://<bucket>/dags/dag_1.py
s3://<bucket>/dags/dag_2.py
s3://<bucket>/dags/kube/kube_config.yaml
s3://<bucket>/dags/operators/operator_1.py
s3://<bucket>/dags/operators/operator_2.py
s3://<bucket>/dags/.airflowignore
Example .airflowignore file:
kube/
operators/
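For illustration, here is a minimal DAG sketch (not taken from the MWAA docs) that points the operator's config_file at the kubeconfig stored under dags/kube/; the DAG id, namespace and image are placeholders, and the import path varies with the cncf-kubernetes provider version:
from datetime import datetime

from airflow import DAG
# In newer provider versions the module is airflow.providers.cncf.kubernetes.operators.pod
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

with DAG(dag_id="eks_example", start_date=datetime(2022, 1, 1), schedule_interval=None) as dag:
    run_pod = KubernetesPodOperator(
        task_id="run_pod",
        namespace="default",                 # placeholder namespace
        image="amazon/aws-cli",              # placeholder image
        cmds=["sh", "-c", "echo hello"],
        name="example-pod",
        # kubeconfig shipped in the DAG folder (see the layout above)
        config_file="/usr/local/airflow/dags/kube/kube_config.yaml",
        in_cluster=False,
        get_logs=True,
    )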

Can I programmatically retrieve the directory an EFS Recovery Point was restored to?

I'm trying to restore data in EFS from recovery points managed by AWS Backup. It seems AWS Backup does not support destructive restores and will always restore to a directory in the target EFS file system, even when creating a new one.
I would like to sync the data extracted from such a recovery point to another volume, but right now I can only do this manually, as I need to look up the directory name that is used by the start-restore-job operation (e.g. aws-backup-restore_2022-05-16T11-01-17-599Z), as stated in the docs:
You can restore those items to either a new or existing file system. Either way, AWS Backup creates a new Amazon EFS directory (aws-backup-restore_datetime) off of the root directory to contain the items.
Looking further through the documentation, I can't find either of:
an option to set the name of the directory used
the directory name returned by any call (either start-restore-job or describe-restore-job)
I have also checked how the datetime portion of the directory name maps to the creationDate and completionDate of the restore job, but it seems neither matches (completionDate is very close, but it's not the exact same timestamp).
Is there any way for me to do one of these two things? With both of them missing, restoring a file system from a recovery point in an automated fashion is very hard.
Is there any way for me to do one of these two things?
As it stands, no.
However, since we know that the directory will always be in the root, doing find . -type d -name "aws-backup-restore_*" should return the directory name to you. You could also further filter this down based on the year, month, day, hour & minute.
You could have something polling the job status on the machine that has the EFS file system mounted, finding the correct directory and then pushing that to AWS Systems Manager Parameter Store for later retrieval. If restoring to a new file system, this of course becomes more difficult but still doable in an automated fashion.
If you're not mounting this on an EC2 instance, running a Lambda function with the EFS file system mounted, for example, will let you obtain the directory and then push it to Parameter Store for retrieval elsewhere. The Lambda service mounts EFS file systems while the execution environment is being prepared - in other words, during the 'cold start' period - so there is no extra invocation time to pay for, making this the cheapest option.
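As a rough sketch of that approach (my own, not an AWS example), the following polls the restore job, finds the restore directory on the mounted file system, and stores it in Parameter Store; the mount point, restore job id and parameter name are assumptions:
import glob
import time

import boto3

backup = boto3.client("backup")
ssm = boto3.client("ssm")

EFS_MOUNT = "/mnt/efs"                    # assumed mount point of the target file system
RESTORE_JOB_ID = "your-restore-job-id"    # assumed id returned by start-restore-job

# Poll the restore job until it reaches a terminal state
while True:
    job = backup.describe_restore_job(RestoreJobId=RESTORE_JOB_ID)
    if job["Status"] in ("COMPLETED", "ABORTED", "FAILED"):
        break
    time.sleep(30)

if job["Status"] == "COMPLETED":
    # AWS Backup always creates the restore directory off the root of the file system
    dirs = glob.glob(f"{EFS_MOUNT}/aws-backup-restore_*")
    if dirs:
        restore_dir = max(dirs)  # timestamps sort lexicographically, so max() is the newest
        ssm.put_parameter(
            Name="/efs/latest-restore-dir",  # assumed parameter name
            Value=restore_dir,
            Type="String",
            Overwrite=True,
        )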
There's no built-in way via the APIs, however, to obtain the directory or configure it, so you're stuck there.
It's a failure on AWS's part that they neither return the directory name they use in any way, nor does any of the returned metadata - creationDate/completionDate - exactly match the timestamp used to name the directory.
If you're an enterprise customer, suggest this as a missing feature to your TAM or SA.

Is it possible to execute the modified user data of an existing EC2 once?

The AWS documentation at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html shows how to modify the instance user data of an existing EC2 instance.
However, at the end it says the modified user data will not be executed (step 7). What's the point of modifying the user data if it is not executed? Is it possible to execute the modified user data once?
To modify instance user data
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the instance and choose Instance state, Stop instance. If this option is disabled, either the instance is already stopped or its root device is an instance store volume.
4. When prompted for confirmation, choose Stop. It can take a few minutes for the instance to stop.
5. With the instance still selected, choose Actions, Instance settings, Edit user data.
6. Modify the user data as needed, and then choose Save.
7. Restart the instance. The new user data is visible on your instance after you restart it; however, user data scripts are not executed.
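For completeness, the same stop / edit / start cycle can also be scripted with boto3. This is only a sketch with a placeholder instance id and example user data, and - as step 7 above says - the new user data still will not be executed on its own:
import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder instance id

# Steps 3-4: stop the instance and wait for it to stop
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# Steps 5-6: replace the user data (botocore base64-encodes the blob value for you)
new_user_data = "#!/bin/bash\necho 'hello from updated user data'\n"
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    UserData={"Value": new_user_data.encode("utf-8")},
)

# Step 7: start the instance again; the new user data is stored but not executed
ec2.start_instances(InstanceIds=[INSTANCE_ID])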
The User Data field was created as a way to pass information to an instance. For example, providing a password to a database, or configuration information.
Then, the Ubuntu community came up with the clever idea of passing a script via the User Data field, and having some code on the instance execute the script when the system boots. This has enabled "self-configuring" systems, and is called cloud-init.
By default, the User Data script only executes once per instance, with the intention of installing software.
From Boot Stages — cloud-init documentation:
cloud-init ships a command for manually cleaning the cache: cloud-init clean
Running this command will 'forget' the previous runs, and will execute the User Data script on the next boot.
It is also possible to run a script on every boot by placing the script in /var/lib/cloud/scripts/per-boot/.
Simple step-by-step working solution:
Stop the instance
Go to Actions > Instance settings > Edit user data
Make sure to choose As text on the Edit user data screen
Add your commands in the text area
Now start the instance and check that it is in the Running state
Check your public IP (it changes after the restart)
Finally, connect to the instance using ssh:
ssh -i <KEY_FILE> <USER>@<IP_ADDRESS>

wso2 api-m running in docker as a non-root user

I am looking into running wso2-am in OpenShift.
I am trying to run AM, but it keeps failing because of missing permissions to write to the file system.
Unable to create the directory
[/opt/wso2/wso2am-2.1.0/repository/deployment/server/webapps/am#sample#calculator#v1]
Unable to create the directory
[/opt/wso2/wso2am-2.1.0/repository/deployment/server/webapps/authenticationendpoint]
In all the examples I see, the container is running as root, but we want to avoid that and run it as USER 1010.
Can you set a value to make it write to a specified location?
Running it as a user with uid 1010 will not help either. You need to set up the file system permissions so that the directories and files you need to write to belong to group root and are writable by the group.
This is necessary because, by default under OpenShift, your application will run as an assigned uid unique to your project. This uid is outside the range of what would be in the /etc/passwd file, and you cannot predict what it will be in advance. Because it isn't in /etc/passwd, the process falls back to running with group root, which is why you need to satisfy the requirement of the file system permissions being group root and writable by the group.

How to set up different uploaded file storage locations for Laravel 5.2 in local deployment and AWS EB w/ S3?

I'm working on a Laravel 5.2 application where users can send a file by POST, the application stores that file in a certain location and retrieves it on demand later. I'm using Amazon Elastic Beanstalk. For local development on my machine, I would like the files to store in a specified local folder on my machine. And when I deploy to AWS-EB, I would like it to automatically switch over and store the files in S3 instead. So I don't want to hard code something like \Storage::disk('s3')->put(...) because that won't work locally.
What I'm trying to do here is similar to what I was able to do for environment variables for database connectivity... I was able to find some great tutorials where you create an .env.elasticbeanstalk file, create a config file at ~/.ebextensions/01envconfig.config to automatically replace the standard .env file on deployment, and modify a few lines of your database.php to automatically pull the appropriate variable.
How do I do something similar with file storage and retrieval?
Ok. Got it working. In /config/filesystems.php, I changed:
'default' => 'local',
to:
'default' => env('DEFAULT_STORAGE') ?: 'local',
In my .env.elasticbeanstalk file (see the original question for an explanation of what this is), I added the following (I'm leaving out my actual key and secret values):
DEFAULT_STORAGE=s3
S3_KEY=[insert your key here]
S3_SECRET=[insert your secret here]
S3_REGION=us-west-2
S3_BUCKET=cameraflock-clips-dev
Note that I had to specify my region as us-west-2 even though S3 shows my environment as Oregon.
In my upload controller, I don't specify a disk. Instead, I use:
\Storage::put($filePath, $filePointer, 'public');
This way, it always uses my "default" disk for the \Storage operation. If I'm in my local environment, that's my public folder. If I'm in AWS-EB, then my Elastic Beanstalk .env file goes into effect and \Storage defaults to S3 with appropriate credentials.