I'm using AWS to host a static website. Unfortunately, it's very tedious to upload the directory to S3. Is there any way to streamline the process?
Have you considered using the AWS CLI (AWS Command Line Interface) to interact with AWS services and resources?
Once you have installed and configured the AWS CLI, all you need to do to update the site is
aws s3 sync /local/dev/site s3://my-website-bucket
This way you can continue developing the static site locally, and a simple aws s3 sync call will look at the files that have changed since the last sync and upload them to S3 automatically, without any mess.
To make the newly created objects public (if this is not already handled by a bucket policy):
aws s3 sync /local/dev/site s3://my-website-bucket --acl public-read
The best part is that multipart upload is built in. Additionally, you can sync back from S3 to local (the reverse direction).
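For example, a minimal deploy sketch (the bucket name and local path are placeholders; --delete removes objects from the bucket that no longer exist locally, so use it with care):
# upload local changes, removing objects deleted locally
aws s3 sync /local/dev/site s3://my-website-bucket --delete
# pull the bucket back down to a local copy (the reverse direction)
aws s3 sync s3://my-website-bucket /local/dev/site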
I have done a number of searches and can't seem to work out whether this is doable at all.
I have a data logger with an FTP-push function. The FTP-push function has the following settings:
FTP server
Port
Upload directory
User name
Password
In general, I understand that a Filezilla client (I have a Pro edition) is able to drop files into my AWS S3 bucket, and I have done this successfully on my local PC.
Is it possible to remove the Filezilla client requirement and input my S3 information directly into my data logger? Something like the below diagram:
Data logger ----FTP----> S3 bucket
If not, what would be the most sensible method to have my data logger's JSON files dropped into AWS S3 via FTP?
Frankly, you'd be better off with:
Logging to local files
Using a schedule to copy the log files to Amazon S3 with the aws s3 sync command
The schedule could be triggered by cron (Linux) or a Scheduled Task (Windows).
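As a rough sketch (the paths and bucket name are placeholders, and it assumes the AWS CLI is already configured on the logger's host), a crontab entry that syncs the log directory every hour might look like:
# top of every hour: only files changed since the last sync are uploaded
0 * * * * aws s3 sync /var/log/datalogger s3://my-log-bucket/logs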
Amazon recently added FTP support to AWS Transfer. This provides an integration with Amazon S3 via FTP without setting up any additional infrastructure; however, you should review the pricing first.
As an alternative, you could create an intermediary server that syncs between itself and Amazon S3 using the AWS CLI's aws s3 sync command.
I want to sync an AWS S3 bucket up with files on a remote non-AWS server. I have proper access to both the remote server and the EC2 instance that has access to the S3 bucket. What is the best way to do this?
I looked at the docs for the aws s3 sync command, and it looks like you can only sync an S3 bucket with files that are local to the server that has access to the S3 bucket.
The problem is, I have files on a remote server that I want to sync up with an S3 bucket, but that server is not an AWS EC2 instance.
I am able to use the rsync command to get the files from the remote server onto the AWS server that has access to the S3 bucket. But if I run rsync and then aws s3 sync, it becomes a two-step process to move the files, takes about twice as long, and, because there are a lot of files, I'd also have to increase the size of the EC2 instance's volume to hold all of them at once. None of that is ideal.
As such, is there a way to sync an S3 bucket with a remote server that is not an AWS server and does not have access to the S3 bucket, using an EC2 instance that does have access to the bucket as an intermediary? Thank you.
The simplest approach would be to use sshfs. This link has detailed instructions, but the basic process is as follows:
Create a local directory where you'll mount the remote system, such as /tmp/syncmount
Run sshfs USER@REMOTE:DIRECTORY /tmp/syncmount
Run aws s3 sync /tmp/syncmount s3://YOUR_BUCKET
I'm assuming that if you have rsync access, you also have general SSH access.
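Putting the steps together, a minimal sketch (user, host, directory, and bucket names are placeholders) could be:
mkdir -p /tmp/syncmount                       # local mount point
sshfs USER@REMOTE:DIRECTORY /tmp/syncmount    # mount the remote directory over SSH
aws s3 sync /tmp/syncmount s3://YOUR_BUCKET   # upload changed files to the bucket
fusermount -u /tmp/syncmount                  # unmount when finished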
I have a workflow need. I have a customer who does not want to deal with the S3 folders where we drop their files; they want us to send the files directly to their SFTP account. When I unload files from my backend, they automatically unload to S3 from AWS services. As this is a one-time request per customer, I don't wish to set up an automated transfer protocol in a Lambda or bash script, nor do I wish to go through the hassle of copying the file to my local server only to post it to the SFTP site. I would prefer to just right-click on the file and select to transfer it to the SFTP location. Does anyone know if AWS has any plans to add file transfer protocol support into the S3 console UI? (SFTP, FTP, etc.)
What would be even better is if AWS S3 allowed all files dropped into an S3 bucket location to be automatically transferred to a defined SFTP location, for the scenario where the customer never wishes to deal with S3 but we need to use it.
Given the current capabilities of Amazon S3, automating a send of files from Amazon S3 to an SFTP target would require the use of an AWS Lambda function.
There are a few ways to do this. Since you are looking for the easiest way, I would suggest installing s3fs-fuse on a Linux server; this enables you to mount S3 as a file system. You can mount it directly on the SFTP server and copy the files locally. Below is the URL for s3fs.
https://cloud.netapp.com/blog/amazon-s3-as-a-file-system
The other method would be to use the AWS CLI to do a recursive copy. This involves installing the AWS CLI and generating API keys. Below is an example of the command.
aws s3 cp s3://mybucket/test.txt test2.txt
You can revoke the API keys once you are done with the transfer!
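If you need to copy a whole prefix rather than a single file, a sketch (bucket name and paths are placeholders) would be:
aws s3 cp s3://mybucket/customer-files/ /local/outbox/ --recursive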
As per my project requirements, I want to fetch some files from an on-prem FTP server and put them into an S3 bucket. The files are 1-2 GB in size. Once a file is placed in the FTP server folder, I want it to be uploaded to the S3 bucket.
Please suggest the easiest way to achieve this.
Note: The files will mostly be put on the FTP server only once a day, so I don't want to continuously scan the FTP server. Once the files have been uploaded to S3 from the FTP server, I want to terminate any resources (like EC2) created in AWS.
These are my ideas:
I think you could create an agent on your FTP server that uploads the files every N seconds/minutes/hours using the AWS CLI. This way you avoid external access to your FTP server.
Another approach is a Lambda function for the pulling process, but as you said, the FTP server doesn't allow external access.
Create a VPN between your on-premises network and the cloud infrastructure, create a CloudWatch event, and execute the pulling process through a Lambda function (where you can configure a timeout).
Create a VPN between your on-premises network and the cloud infrastructure, and upload the files from your FTP server using the AWS CLI (pay attention to the sync option). Take a look at this link: https://aws.amazon.com/answers/networking/accessing-vpc-endpoints-from-remote-networks/
With Jenkins, create a task that executes a process to upload the files.
You can use Storage Gateway; its site is here: https://aws.amazon.com/es/storagegateway/
Here is how we solved it.
Enable S3 Transfer Acceleration on your S3 bucket. This is very much needed, since you are pushing large files.
If you have access to the server, install the AWS CLI and perform a sync of the folder to the S3 bucket. The AWS CLI will automatically sync your folder to the bucket, so if you change any of your existing files it will stay in sync with the S3 bucket. This is the ideal and simplest way if you have access to the server and are able to install the AWS CLI.
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html#transfer-acceleration-examples-aws-cli
aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled
If you want to enable it for a specific profile or the default profile:
aws configure set default.s3.use_accelerate_endpoint true
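Alternatively, a sketch of using the accelerate endpoint for a single command, without changing your profile (bucket and file names are placeholders):
aws s3 cp bigfile.dat s3://bucketname/bigfile.dat --endpoint-url https://s3-accelerate.amazonaws.com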
If you don't have access to the FTP server on your premises, you need an external server to perform this process. In that case you need to poll or share the file system, copy the file locally, and move it to the S3 bucket. There will be a lot of failure points with this process.
Hope it helps.
I have to upload some static HTML and CSS files to Amazon S3, and have been given an Access Key ID as well as a Secret Access Key.
I've signed up for AWS; how do I upload stuff?
If you are comfortable using the command line, the most versatile (and enabling) approach for interacting with (almost) all things AWS is to use the excellent AWS Command Line Interface (AWS CLI). It covers most services' APIs, and it also features higher-level S3 commands that make your use case considerably easier; see the AWS CLI reference for S3 (the lower-level commands are in s3api). Specifically, you are likely interested in:
cp - Copies a local file or S3 object to another location locally or in S3
sync - Syncs directories and S3 prefixes.
I use the latter to deploy static websites hosted on S3 by simply syncing what's changed, which is convenient and fast. Your use case is covered by the first of several Examples (more fine-grained usage with --exclude, --include, prefix handling, etc. is available):
The following sync command syncs objects under a specified prefix and
bucket to files in a local directory by uploading the local files to
s3. [...]
aws s3 sync . s3://mybucket
While the AWS CLI supports the regular AWS Credentials handling via environment variables, you can also configure Multiple Configuration Profiles for yourself and other AWS accounts and switch as needed:
The AWS CLI supports switching between multiple profiles stored within the configuration file. [...] Each profile uses different credentials—perhaps from two different IAM users—and also specifies a different region. The first profile, default, specifies the region us-east-1. The second profile, test-user, specifies us-west-2. Note that, for profiles other than default, you must prefix the profile name with the string, profile.
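As a sketch (the profile name is a placeholder), the sync command above with an explicit profile would be:
aws s3 sync . s3://mybucket --profile test-user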
Assuming you want to upload to S3 storage, there are some good free apps out there. If you search for "CloudBerry Labs", they have a free "S3 Explorer" application which lets you drag and drop your files to your S3 storage. When you first install and launch the app, there will be a place to configure your connection. That's where you'll put in your Access Key and Secret Key.
Apart from the AWS CLI, there are a number of 'S3 browsers'. These act very much like FTP clients, showing the folder structure and files on the remote store, and allowing you to upload and download much as you would with FTP.
This isn't the right forum for recommendations, but if you search the usual places for well received s3 browsers you'll find plenty of options.
To upload a handful of files to S3 (the cloud storage and content distribution system), you can log in to the AWS console and use the S3 application.
https://console.aws.amazon.com/console/home?#
There's also a ton of documentation on AWS about the various APIs.