I have an AMI with s3cmd and the EC2 API tools pre-configured. While creating a new instance with user data that downloads files from an S3 bucket, I face the following problem.
In the user data I have code for:
- creating a new directory on the new instance
- downloading a file from the AWS S3 bucket
The script is:
#! /bin/bash
cd /home
mkdir pravin
s3cmd get s3://bucket/usr.sh >> temp.log
In the script above, mkdir pravin creates the new directory named pravin, but s3cmd get s3://bucket/usr.sh does not download the file from the S3 bucket. It also creates temp.log, but the file is empty.
How can I solve this problem?
An alternative solution would be to use an instance that has an IAM role assigned to it and the aws-cli, which would require that you have Python installed. All of this could be accomplished by inserting the following in the user-data field for your instance:
#!/bin/bash
apt-get update
apt-get -y install python-pip
apt-get -y install awscli
mkdir pravin
aws s3 cp s3://bucket/usr.sh temp.log --region {YOUR_BUCKET_REGION}
NOTE: The above is applicable for Ubuntu only.
And then for your instance's IAM role you would attach a policy like so:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::YourBucketName/*"
    }
  ]
}
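If the copy still fails, two quick checks from inside the instance can help (assuming an Ubuntu image that logs user data via cloud-init, and that the role is actually attached):
# Confirm the instance-role credentials are visible to the CLI
aws sts get-caller-identity
# Review what the user-data script actually did on boot
tail -n 50 /var/log/cloud-init-output.log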
I suspect that the user running the user data script lacks a .s3cfg file. You may need to find a way to indicate the location of the file when running this script.
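If that is the case, note that user data normally runs as root, so s3cmd will look for /root/.s3cfg rather than the configuration created for your login user. s3cmd accepts an explicit config path; a sketch, assuming the file was created under /home/ubuntu:
# Point s3cmd at the existing configuration file explicitly
s3cmd -c /home/ubuntu/.s3cfg get s3://bucket/usr.sh /home/pravin/usr.sh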
I am working on an Ubuntu 22.04 desktop using the AWS CLI. I am trying to upload ALL files located in a specific local directory to our S3 bucket, but I'm getting an error. Here is my command and the error:
ms#ms01:~$ aws s3 cp /home/ms/Downloads/TWU/mp3/ s3://abc.org/v2/ –-recursive
The error I'm getting is: Unknown options: --recursive
Any help/direction would be appreciated. Thanks.
Try putting --recursive earlier in the command:
aws s3 cp --recursive /home/ms/Downloads/TWU/mp3/ s3://abc.org/v2/
Alternatively, the sync command always includes all sub-directories and only copies files that are not already in the destination (so it can be run multiple times to only copy new/changed files):
aws s3 sync /home/ms/Downloads/TWU/mp3/ s3://abc.org/v2/
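If you want to preview what either command would transfer before running it for real, both cp and sync accept a dry-run flag:
aws s3 sync --dryrun /home/ms/Downloads/TWU/mp3/ s3://abc.org/v2/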
You might also be using an old version of the AWS CLI. Try updating it with: pip install awscli -U
I have to mount an S3 bucket inside a Docker container so that we can store the container's contents in the S3 bucket.
I found https://www.youtube.com/watch?v=FFTxUlW8_QQ&ab_channel=ValaxyTechnologies, a video which shows how to do the same thing for an EC2 instance instead of a Docker container.
I am following the same steps as mentioned in that video. So far I have done the following inside the Docker container:
(Install FUSE Packages)
apt-get install build-essential gcc libfuse-dev libcurl4-openssl-dev libxml2-dev mime-support pkg-config libxml++2.6-dev libssl-dev
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
make install
(Ensure you have an IAM Role with Full Access to S3)
(Create the Mountpoint)
mkdir -p /var/s3fs-demo-fs
(Target Bucket)
aws s3 mb s3://s3fs-demo-bkt
But when I try to mount the S3 bucket using
s3fs s3fs-demo-bkt /var/s3fs-demo-fs -o iam_role=
I get the following message:
fuse: device not found, try 'modprobe fuse' first
I have looked at several solutions for this problem, but I am not able to resolve the issue. Please let me know how I can solve it.
I encountered the same problem, but the issue was fixed by adding --privileged to the docker run command.
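For reference, a sketch of what that can look like (the image name is a placeholder); if you prefer not to run fully privileged, exposing just the FUSE device and the SYS_ADMIN capability is often enough:
# Fully privileged, as described above (image name is hypothetical)
docker run --privileged -it my-s3fs-image /bin/bash
# Narrower alternative: grant only what FUSE needs
# (on some hosts you may also need --security-opt apparmor:unconfined)
docker run --cap-add SYS_ADMIN --device /dev/fuse -it my-s3fs-image /bin/bash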
I am wondering if there is a straightforward way in Docker to build an image that has both the AWS CLI and the gsutil CLI installed on it. Unfortunately, an S3 bucket name containing periods produces a "Host ... returned an invalid certificate" error (https://github.com/GoogleCloudPlatform/gsutil/issues/267), and I cannot change the S3 bucket name, which means I cannot do the following:
gsutil -m cp -r "s3://path.with.periods/path/files" "gs://bucket_path/path"
so instead I'll have to do something like:
aws s3 cp --recursive --quiet "s3://path.with.periods/path/files" ./
gsutil -m cp -r "./" "gs://bucket_path/path"
but I was wondering if there is a straightforward Dockerfile that could run these commands?
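One option (a sketch, not a tested recipe) is to start from the google/cloud-sdk image, which already ships gsutil, and add the AWS CLI from the distribution packages; the image name, tag, and package choice are assumptions you may want to adjust:
# Write a minimal Dockerfile; google/cloud-sdk is Debian-based, so apt-get is available
cat > Dockerfile <<'EOF'
FROM google/cloud-sdk:slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends awscli && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /data
ENTRYPOINT ["/bin/bash"]
EOF
# Build the image, then run the two-step copy inside it
docker build -t s3-to-gcs .
At run time you would still need to supply AWS credentials (for example via environment variables or a mounted ~/.aws directory) and Google credentials (for example a mounted service-account key activated with gcloud auth activate-service-account).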
Actually I am working on a pipeline, so I have a scenario where I push some artifacts into S3. I have written a shell script which downloads the folder and copies each file to its desired location in a WildFly server (EC2 instance).
#!/bin/bash
mkdir /home/ec2-user/test-temp
cd /home/ec2-user/test-temp
aws s3 cp s3://deploy-artifacts/test-APP test-APP --recursive --region us-east-1
aws s3 cp s3://deploy-artifacts/test-COMMON test-COMMON --recursive --region us-east-1
cd /home/ec2-user/
sudo mkdir -p /opt/wildfly/modules/system/layers/base/psg/common
sudo cp -rf ./test-temp/test-COMMON/standalone/configuration/standalone.xml /opt/wildfly/standalone/configuration
sudo cp -rf ./test-temp/test-COMMON/modules/system/layers/base/com/microsoft/* /opt/wildfly/modules/system/layers/base/com/microsoft/
sudo cp -rf ./test-temp/test-COMMON/modules/system/layers/base/com/mysql /opt/wildfly/modules/system/layers/base/com/
sudo cp -rf ./test-temp/test-COMMON/modules/system/layers/base/psg/common/* /opt/wildfly/modules/system/layers/base/psg/common
sudo cp -rf ./test-temp/test-APP/standalone/deployments/HS.war /opt/wildfly/standalone/deployments
sudo cp -rf ./test-temp/test-APP/bin/resource /opt/wildfly/bin/resource
sudo cp -rf ./test-temp/test-APP/modules/system/layers/base/psg/* /opt/wildfly/modules/system/layers/base/psg/
sudo cp -rf ./test-temp/test-APP/standalone/deployments/* /opt/wildfly/standalone/deployments/
sudo chown -R wildfly:wildfly /opt/wildfly/
sudo service wildfly start
But every time I push new artifacts into S3, I have to go to the server and run this script manually. Is there a way to automate it? I was reading about Lambda, but once Lambda knows about the change in S3, where am I going to define my shell script to run?
Any guidance will be helpful.
To trigger the Lambda function on file upload to the S3 bucket, you have to set up an event notification on the S3 bucket.
Steps for setting up the S3 event notification:
1 - Your Lambda and the S3 bucket should be in the same region.
2 - Go to the Properties tab of the S3 bucket.
3 - Open up Events and provide values for the event types, such as Put or Copy.
4 - Specify the Lambda ARN in the "Send to" option.
Now create the Lambda function and add the S3 bucket as a trigger. Just make sure your Lambda IAM policy is properly set.
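As for where the shell script runs: one common pattern is to keep the script on the EC2 instance and have the Lambda invoke it through SSM Run Command. Below is a sketch of the call the Lambda would need to make, expressed as the equivalent AWS CLI command; the instance ID and script path are placeholders, and the instance needs the SSM agent plus an instance profile that permits SSM.
# Hypothetical instance ID and script path; inside the Lambda you would issue the
# same SendCommand call through its runtime's AWS SDK.
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --instance-ids "i-0123456789abcdef0" \
  --parameters 'commands=["/home/ec2-user/deploy.sh"]' \
  --region us-east-1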
Here's the current scenario -
I have multiple S3 buckets, which have SQS events configured for PUTs of objects from an FTP server, which I have set up using S3FS.
Also, I have multiple directories on an EC2 instance into which a user can PUT an object; these get synced with the different S3 buckets (using S3FS), which generate the SQS events (using S3's SQS event notifications).
Here's what I need to achieve:
Instead of multiple S3 buckets, I need to consolidate the logic at folder level,
i.e. I have now created different folders for each bucket that I had created previously, and I have created separate SQS events for PUTs in the individual folders.
Now I want to tweak the bucket-level logic of S3FS to work at folder level within a single S3 bucket,
i.e. I want to create 3 different directories on the EC2 instance, e.g. A, B, C.
If I PUT an object in directory A on the EC2 instance, the object must get synced with folder A in the S3 bucket,
and similarly for directory B and folder B, and for directory C and folder C.
Here are the steps I followed for installing S3FS:
ssh into the EC2
sudo apt-get install automake autotools-dev g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
sudo make install
Mounting S3 Bucket to File System
echo access-key-id:secret-access-key > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs
mkdir /mnt/bucketname
echo s3fs#bucketname /mnt/bucketname fuse _netdev,rw,nosuid,nodev,allow_other 0 0 >> /etc/fstab
mount -a
Now these steps achieve sync between a particular directory on the EC2 instance and the S3 bucket.
How do I tweak this to sync, say, 2 different directories on the EC2 instance with 2 different folders in the S3 bucket?
I am a Linux and AWS newbie, please help me out.
Do not mount the S3 bucket to the file system. Use the AWS CLI and cron to sync the EC2 directories with the S3 bucket folders.
Install the AWS CLI on the EC2 instance (the guide at http://tecadmin.net/install-s3cmd-manage-amazon-s3-buckets/# covers s3cmd, which could be used instead with s3cmd sync).
Start a cron job to achieve the sync between the local directories and the S3 bucket subfolders.
Create a script file, for example "script.sh":
#!/bin/bash
aws s3 sync /path/to/folder/A s3://mybucket/FolderA
aws s3 sync /path/to/folder/B s3://mybucket/FolderB
aws s3 sync /path/to/folder/C s3://mybucket/FolderC
Then set up a cron job, for example:
* * * * * /root/scripts/script.sh
And you will achieve your use case.
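To wire that up (a sketch, assuming the script was saved as /root/scripts/script.sh as in the crontab line above):
# Make the script executable
chmod +x /root/scripts/script.sh
# Append the entry to root's crontab (runs every minute, as above)
( crontab -l 2>/dev/null; echo "* * * * * /root/scripts/script.sh" ) | crontab -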