I have an AWS S3 bucket holding a development website. I would like to connect to the bucket over FTP with SSL (FTPS), and also be able to create username and password credentials for others. Is this possible, and how can I do it?
Thanks!
Before giving up on S3, remember that frustration with a new product or technology often comes from a lack of knowledge and experience. The Amazon cloud platform has some amazing services to work with.
FTP is an older protocol that S3 does not support natively; S3 exposes a REST API instead, and that is what most tools use today. You can also easily copy files to and from S3 using command-line tools. Look into the AWS Command Line Interface (CLI), linked below.
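If you later want to script those copies rather than run CLI commands by hand, here is a minimal sketch using boto3 (the Python SDK, which wraps the same REST API); the bucket and file names are placeholders:

import boto3

# Placeholder bucket and file names; substitute your own.
BUCKET = "my-dev-website-bucket"

s3 = boto3.client("s3")  # uses credentials from your AWS profile or environment

# Upload a local file to the bucket (the programmatic equivalent of an `aws s3 cp` to S3).
s3.upload_file("index.html", BUCKET, "index.html")

# Download it back to a local file.
s3.download_file(BUCKET, "index.html", "index-copy.html")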
If your goal is to use S3 as your source repository, look into AWS CodeCommit instead. It is a managed Git-based repository service. There are also CodePipeline, CodeBuild, and CodeDeploy. Combine these tools with other Amazon services such as CloudFormation and you have real developer power.
AWS Command Line Interface
AWS Code Services
AWS CloudFormation
I'm sort of a newbie when it comes to the AWS cloud, so sorry if I sound naïve.
As a .NET developer, I've used Visual Studio 2019's AWS Lambda project template to code Lambda functions and ultimately deploy them to the AWS cloud.
However, my concern is that there seems to be no way to version and/or back up the configurations for the AWS services (e.g., S3 buckets, Amazon SNS and SQS, etc.) that are invoked by and/or trigger the various AWS Lambda functions.
The problem is that the IT developers who configure these AWS services have to do so through the AWS Management Console GUI (accessed via ADFS), and if someone mistakenly deletes an AWS service, the configuration settings are lost as well.
How do we go about versioning and/or backing up Configurations for the AWS Services?
There are Infrastructure as Code frameworks like Terraform and Ansible designed to address exactly that.
You can't really delete an AWS service, only the resources you create within it.
It seems like you are fairly new to AWS, so I recommend using CloudFormation templates as your Infrastructure as Code tool. All the configuration describing how your AWS resources should look can be added to a template, and you deploy the template to create those resources. It's AWS-native and does not cost you anything.
On top of that, you also want to add your CloudFormation templates to a version control system.
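To make that concrete, here is a minimal boto3 sketch of deploying a template that lives in version control; the template file name and stack name are hypothetical and just stand in for your own:

import boto3

cfn = boto3.client("cloudformation")

# "template.yaml" is a hypothetical template file kept in your version control system.
with open("template.yaml") as f:
    template_body = f.read()

# Check the template before deploying it.
cfn.validate_template(TemplateBody=template_body)

# Create the stack that owns your S3 bucket, SNS topic, and so on.
cfn.create_stack(
    StackName="my-app-resources",  # hypothetical stack name
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # only needed if the template creates IAM resources
)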
I am still new to the cloud and when I first started I used Clever Cloud.
But now I want to migrate to AWS, and I have data that I want to move from Cellar to Amazon S3.
I am not sure what the conventions or best practices are for this, so any documentation or explanation of how I can proceed would be very much appreciated.
Thank you very much.
Clever Cloud Cellar is an Amazon S3 compatible service. This means it operates pretty much the same as S3.
Clever Cloud is not able to communicate directly with Amazon S3, and Amazon S3 is not able to communicate directly with Cellar. Therefore, you will need to:
Download the files from Cellar using s3cmd or the AWS Command-Line Interface (CLI) (see instructions on CleverCloud Cellar website)
Upload the files to an Amazon S3 bucket using the AWS CLI and your AWS credentials
This activity would be most efficient if performed from an Amazon EC2 instance since it has high bandwidth connectivity to Amazon S3.
Note that there will be Data Transfer costs from Clever Cloud for "Outbound traffic".
I suggest you start by getting s3cmd or the AWS CLI working with Cellar to download a single file, and then get the AWS CLI working with Amazon S3 to upload a single file. You can then use the sync command to copy whole directories of files.
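As a rough illustration of that single-file test, here is a boto3 sketch; the Cellar endpoint URL, access keys, and bucket names are placeholders, so take the real endpoint and credentials from your Cellar add-on dashboard and your AWS account:

import boto3

# Cellar is S3-compatible, so boto3 can talk to it; just point the client at the Cellar endpoint.
cellar = boto3.client(
    "s3",
    endpoint_url="https://cellar-c2.services.clever-cloud.com",  # placeholder; use your add-on's endpoint
    aws_access_key_id="CELLAR_KEY_ID",
    aws_secret_access_key="CELLAR_KEY_SECRET",
)
aws_s3 = boto3.client("s3")  # regular AWS credentials from your profile or environment

# Copy a single file first to prove the plumbing works, then move on to syncing directories.
cellar.download_file("my-cellar-bucket", "example.txt", "/tmp/example.txt")
aws_s3.upload_file("/tmp/example.txt", "my-aws-bucket", "example.txt")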
I am working on a multi-cloud (AWS and GCP) pet project that uses a serverless architecture.
There are files generated by the business logic within GCP (using Cloud Functions and Pub/Sub) and they are stored in GCP Cloud Storage. I want to ingest these files dynamically into an AWS S3 bucket from Cloud Storage.
One possible way is to use the gsutil tool (see Exporting data from Google Cloud Storage to Amazon S3), but this would require a compute instance and running the gsutil commands manually, which I want to avoid.
In answering this I'm reminded a bit of a Rube Goldberg-type setup, but I don't think this is too bad.
From the Google side you would create a Cloud Function that is notified when a new file is created. You would use the Object Finalize event. This function would get the information about the file and then call an AWS Lambda fronted by AWS API Gateway.
The GCP function would pass the bucket and file information to the AWS Lambda. On the AWS side, the Lambda would use your GCP credentials and the GCP client library to download the file and then upload it to S3.
Something like: GCS Object Finalize event → Cloud Function → API Gateway → AWS Lambda → S3.
All serverless on both GCP and AWS. Testing isn't bad as you can keep them separate - make sure that GCP is sending what you want and make sure that AWS is parsing and doing the correct thing. There is likely some authentication that needs to happen from the GCP Cloud Function to API Gateway. Additionally, the API Gateway can be eliminated if you're OK with pulling AWS client libraries into the GCP function. Since you've got to pull GCP libraries into the AWS Lambda anyway, this shouldn't be much of a problem.
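To make the AWS side more concrete, here is a rough Python sketch of the Lambda handler behind API Gateway; the destination bucket, the service-account key location, and the field names sent by the Cloud Function are all assumptions, and the google-cloud-storage package would need to be bundled with the deployment (for example in a layer):

import json
import boto3
from google.cloud import storage  # must be packaged with the Lambda deployment

s3 = boto3.client("s3")

# Hypothetical names; adjust to your own buckets and service-account setup.
DEST_BUCKET = "my-aws-destination-bucket"
GCP_KEY_FILE = "/opt/gcp-service-account.json"  # e.g. shipped in a Lambda layer

def handler(event, context):
    # With a Lambda proxy integration, API Gateway delivers the Cloud Function's POST body as a string.
    body = json.loads(event["body"])
    gcs_bucket = body["bucket"]  # assumed field names sent by the GCP Cloud Function
    gcs_object = body["name"]

    # Download the object from Google Cloud Storage...
    gcs = storage.Client.from_service_account_json(GCP_KEY_FILE)
    blob = gcs.bucket(gcs_bucket).blob(gcs_object)
    local_path = f"/tmp/{gcs_object.replace('/', '_')}"
    blob.download_to_filename(local_path)

    # ...and upload it to S3.
    s3.upload_file(local_path, DEST_BUCKET, gcs_object)

    return {"statusCode": 200, "body": json.dumps({"copied": gcs_object})}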
I would like to know if there is a way to sync AWS S3 buckets using an out-of-the-box tool, instead of AWS CLI commands like aws s3 sync or a server or Lambda function running a script to do the same.
Basically, I would like to know if there is an AWS tool that does exactly this and keeps the two buckets synced regularly, or if there is a third-party application that would do the same.
Yes, S3 has a built-in replication feature, which supports replicating across accounts.
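If you want to set replication up from code rather than the console, here is a hedged boto3 sketch; the bucket names and role ARN are placeholders, both buckets must already have versioning enabled, and cross-account replication additionally needs the destination account's bucket policy to allow it:

import boto3

s3 = boto3.client("s3")

# Placeholder names; the IAM role must allow S3 to read the source and write to the destination.
s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},  # empty filter = replicate all newly written objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
            }
        ],
    },
)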
I've been using AWS CodeDeploy with GitHub as the revision source. I have a couple of configuration files that contain credentials (e.g., New Relic and other third-party license keys) which I do not want to add to my GitHub repository, but I do need them on the EC2 instances.
What is a standard way of managing these configurations? Or what tools do you use for this purpose?
First, use IAM roles. That removes 90% of your credentials. Once you've done that, you can store (encrypted!) credentials in an S3 bucket and carefully control access. Here's a good primer from AWS:
https://blogs.aws.amazon.com/security/post/Tx1XG3FX6VMU6O5/A-safer-way-to-distribute-AWS-credentials-to-EC2
The previous answers are useful for managing AWS roles/credentials specifically. However, your question is more about general non-AWS credentials, and how to manage them securely using AWS.
What works well for us is to secure the credentials in a properties file in an S3 bucket. Using the same technique as suggested by tedder42 in A safer way to distribute AWS credentials to EC2, you can upload your credentials in a properties file into a highly secured S3 bucket, only available to your instance, which has been configured with the appropriate IAM role.
Then using CodeDeploy, you can add a BeforeInstall lifecycle hook to download the credential files to a local directory via the AWS CLI. For example:
aws s3 cp s3://credentials-example-com/credentials.properties c:\credentials
Then when the application starts, it can read those credentials from the local file.
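As a small illustration of that last step, here is a Python sketch that parses such a file at startup, assuming a simple key=value properties format and that the download above placed the file at c:\credentials\credentials.properties; the property name is hypothetical:

def load_properties(path):
    """Parse a simple key=value properties file into a dict."""
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

# Assumed location, matching the CodeDeploy hook example above.
credentials = load_properties(r"c:\credentials\credentials.properties")
newrelic_key = credentials.get("newrelic.license_key")  # hypothetical property name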
Launch your EC2 instances with an instance profile and then give the associated role access to all the things your service needs access to. That's what the CodeDeploy agent uses to make its calls, but it's really there for any service you run on the instance to use.
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
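For example, with an instance profile attached, SDK calls made on the instance pick up temporary credentials automatically; this boto3 sketch (the bucket name is a placeholder) needs no access keys in code or config files:

import boto3

# On an EC2 instance launched with an instance profile, boto3 finds temporary
# credentials automatically, so nothing sensitive has to live in the repository.
s3 = boto3.client("s3")
response = s3.list_objects_v2(Bucket="my-app-bucket")  # placeholder bucket name
for obj in response.get("Contents", []):
    print(obj["Key"])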