I'm planning to create a small public AMI for converting text files to PDF (I couldn't find anything satisfying on the store), and I have an issue.
I understand that an AMI is essentially a frozen copy of software that I've run successfully on another machine.
However, I have an issue: how do I create an S3 bucket when the AMI is installed (i.e. when an instance is launched from it)?
Use case:
A user comes to the store, finds the idea of my service cool, and launches an instance.
The instance needs to create an S3 bucket to save the converted files (and maybe the source files as well), and it has to be one bucket per user, not one big bucket for all the files converted via the software.
I have several questions about that:
Is it possible to achieve this (was it designed for this)?
How should I create the bucket? Is there a point-and-click interface at AMI setup, or do I need to do it via the AWS SDK?
If I need to do it via the SDK, is there a way to access the user's credentials (or some temporary token) so that I can create a bucket successfully?
Am I wrong? Should all the files be saved on EBS and made available via nginx on the AMI (not using S3 at all)?
Oh, and sorry if this question seems silly, but I'm very new to this cool AWS tech!
Thanks!
You can achieve this using AWS CloudFormation.
The steps would be: first create an AMI, then write a CloudFormation template that
Creates an instance from that AMI
Creates the S3 bucket for that user
Maps the newly created EC2 instance to the S3 bucket (since S3 bucket names are globally unique, you might not get the name you want).
However, note that cloud services are not installed separately for each customer the way a traditional on-premise system is.
Here you have the concept of tenants: every new customer is a tenant and should be served from the same infrastructure. When a new customer comes in, you onboard them as a tenant and, for example, create a folder (prefix) for that tenant within the already-created S3 bucket where you store their artifacts. Or, if for some justified business reason you want a separate S3 bucket for each tenant, then that new bucket should also be created during tenant onboarding. Store the mapping of tenant to S3 folder/bucket somewhere so that you know where each tenant's artifacts go.
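As a rough, non-authoritative sketch of that onboarding step with the AWS CLI (the bucket and tenant names here are made-up examples), it could be as simple as:

# Option 1: one prefix ("folder") per tenant inside a shared, already-created bucket
aws s3api put-object --bucket converted-files-shared --key tenant-42/

# Option 2: a dedicated bucket per tenant (names must be globally unique)
aws s3api create-bucket --bucket converted-files-tenant-42 --region us-east-1

Either way, record the tenant-to-bucket/prefix mapping in whatever datastore your service already uses.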
I am writing an application where I have to upload media files to GCS. So I created a storage bucket and also created a service account, which the application uses to put and get images from the bucket. To use this service account from the application, I had to generate a private key as a JSON file.
I have tested my code and it is working fine. Now I want to push this code to my GitHub repository, but I don't want this service account key to be in GitHub.
How do I keep this service account key secret while still letting all my colleagues use it?
I am also going to run my application on a GCP container instance, and I want it to work there as well.
As I understand it, if your application runs from inside GCP and uses a custom service account, you might not need any private keys (as JSON files) at all.
The custom service account used by your application should be granted the relevant IAM roles/permissions on the corresponding GCS bucket. And that's all you might need to do.
You can assign those IAM roles/permissions manually (through the console UI), using CLI commands, or as part of your CI/CD deployment pipeline.
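As a hedged sketch of the CLI route (the project, service account, and bucket names below are placeholders, not from your setup), the grant could look like:

# Give the app's service account object read/write access on the media bucket
gsutil iam ch serviceAccount:my-app@my-project.iam.gserviceaccount.com:objectAdmin gs://my-media-bucket

Once the code runs on GCP with that service account attached, the Cloud Storage client libraries pick up credentials automatically (Application Default Credentials), so no key file ever needs to be committed to GitHub.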
I am using a RESTful API; the API provider has images in an S3 bucket that is more than 80 GB in size.
I need to download these images and upload them to my own AWS S3 bucket, which is a time-consuming job.
Is there any way to copy the images from the API to my S3 bucket instead of downloading and uploading them again?
I talked with API support; they say I am getting the image URLs, so it's up to me how I handle them.
I am using Laravel.
Is there a way to take the source image URLs and move the images directly to S3 instead of downloading and uploading them first?
Thanks
I think downloading and re-uploading between accounts would be inefficient and pricey for the API provider. Instead, I would talk to the API provider and try to replicate the images across accounts.
After replication, you can use Amazon S3 Inventory for various information about the objects in the bucket.
Configuring replication when the source and destination buckets are owned by different accounts
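If the provider agrees, a minimal, hedged sketch of the source-account side with the AWS CLI (bucket names are examples; the replication rules and the IAM role the provider must create live in a JSON file authored per the linked guide) could be:

# Versioning is required on both the source and destination buckets
aws s3api put-bucket-versioning --bucket provider-images-bucket --versioning-configuration Status=Enabled

# Apply the replication configuration (destination bucket ARN, IAM role, and rules are in replication.json)
aws s3api put-bucket-replication --bucket provider-images-bucket --replication-configuration file://replication.json

Note that a replication rule only applies to objects written after it is in place; existing objects need S3 Batch Replication or a one-time copy.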
You want "S3 Batch Operations". Search for "xcopy".
You do not say how many images you have, but 1,000 at 80 GB is 80 TB, and at that size you would not even want to download them file by file to a temporary EC2 instance in the same region (which might otherwise be a one- or two-day option); you will still pay for ingress/egress.
I am sure AWS will do this on an ad-hoc basis for a price, as they would if you were migrating off the platform.
It may also be easier to allow access to the original bucket from the other account, but that is not what was asked.
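If the provider does grant your AWS account read access to their bucket (for example via a bucket policy), a hedged sketch of a server-side copy with the AWS CLI (bucket names are examples) is:

# Copies objects bucket-to-bucket inside S3, without pulling them through your own machine
aws s3 sync s3://provider-images-bucket s3://my-images-bucket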
I am trying to explore the AWS S3 side a bit more and have one doubt: is it possible to export data from an EC2 Windows instance hosted in one account to an S3 bucket hosted in another AWS account? I know one way is to use external tools like TntDrive, where I can map S3 as a mounted drive and export the data. I am looking for another good solution if S3 provides one, so if someone knows, please share your suggestions.
I think all you would need on your EC2 instance is access to the AWS credentials of the other account; then you could copy your files from the EC2 instance to an S3 bucket in the second account. You may also be able to do it by granting the identity associated with the EC2 instance rights to an S3 bucket owned by the second account - then you could just write to that bucket "yourself" ...
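A minimal sketch of the first approach from the Windows instance, assuming the second account's keys are stored as a named AWS CLI profile (the profile, path, and bucket names here are examples):

# Store the second account's credentials as a named profile (prompts for the keys and region)
aws configure --profile second-account

# Copy a local folder to the other account's bucket using that profile
aws s3 cp C:\exports s3://second-account-bucket/exports --recursive --profile second-account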
I've been using AWS CodeDeploy with GitHub as the revision source. I have a couple of configuration files that contain credentials (e.g. New Relic and other third-party license keys) which I do not want to add to my GitHub repository. But I need them on the EC2 instances.
What is a standard way of managing these configurations? Or what tools do you use for this purpose?
First, use IAM roles. That removes 90% of your credentials. Once you've done that, you can store (encrypted!) credentials in an S3 bucket and carefully control access. Here's a good primer from AWS:
https://blogs.aws.amazon.com/security/post/Tx1XG3FX6VMU6O5/A-safer-way-to-distribute-AWS-credentials-to-EC2
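As a hedged sketch of the second part (the bucket and file names are examples), uploading a credentials file with server-side encryption could look like:

# Upload the file encrypted at rest with SSE-KMS; lock the bucket down with a bucket policy/IAM
aws s3 cp newrelic.properties s3://deploy-secrets-example/newrelic.properties --sse aws:kms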
The previous answers are useful for managing AWS roles/credentials specifically. However, your question is more about general non-AWS credentials and how to manage them securely using AWS.
What works well for us is to keep the credentials in a properties file in an S3 bucket. Using the same technique suggested by tedder42 in A safer way to distribute AWS credentials to EC2, you can upload your credentials as a properties file into a highly secured S3 bucket that is only available to your instance, which has been configured with the appropriate IAM role.
Then using CodeDeploy, you can add a BeforeInstall lifecycle hook to download the credential files to a local directory via the AWS CLI. For example:
aws s3 cp s3://credentials-example-com/credentials.properties c:\credentials
Then when the application starts, it can read those credentials from the local file.
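For reference, the BeforeInstall hook is just a script that your appspec.yml points at. A minimal sketch of such a script, assuming a Linux instance (adapt the paths for the Windows example above; bucket and paths are examples):

#!/bin/bash
# BeforeInstall hook: pull the credentials file from the locked-down bucket
set -e
mkdir -p /etc/myapp
aws s3 cp s3://credentials-example-com/credentials.properties /etc/myapp/credentials.properties
chmod 600 /etc/myapp/credentials.properties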
Launch your EC2 instances with an instance profile and then give the associated role access to all the things your service needs access to. That's what the CodeDeploy agent is using to make calls, but it's really there for any service you are running to use.
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
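A hedged sketch of that wiring with the AWS CLI (the role, profile, policy, and AMI names are examples, and the trust/permission policy documents are assumed to exist as local JSON files):

# Create a role EC2 can assume, attach the app's permissions, and wrap it in an instance profile
aws iam create-role --role-name myapp-ec2-role --assume-role-policy-document file://ec2-trust.json
aws iam put-role-policy --role-name myapp-ec2-role --policy-name myapp-s3-access --policy-document file://s3-access.json
aws iam create-instance-profile --instance-profile-name myapp-ec2-profile
aws iam add-role-to-instance-profile --instance-profile-name myapp-ec2-profile --role-name myapp-ec2-role

# Launch instances with that profile attached
aws ec2 run-instances --image-id ami-12345678 --instance-type t3.micro --iam-instance-profile Name=myapp-ec2-profile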
I have to upload some static HTML and CSS files to Amazon S3, and have been given an Access Key ID as well as a Secret Access Key.
I've signed up for AWS; how do I upload my files?
If you are comfortable using the command line, the most versatile (and enabling) approach for interacting with (almost) all things AWS is the excellent AWS Command Line Interface (AWS CLI). It now covers most services' APIs, and it also features higher-level S3 commands that make your use case considerably easier; see the AWS CLI reference for S3 (the lower-level commands are in s3api). Specifically, you are likely interested in:
cp - Copies a local file or S3 object to another location locally or in S3
sync - Syncs directories and S3 prefixes.
I use the latter to deploy static websites hosted on S3 by simply syncing what has changed, which is convenient and fast. Your use case is covered by the first of several examples (more fine-grained usage with --exclude, --include, prefix handling, etc. is available):
The following sync command syncs objects under a specified prefix and bucket to files in a local directory by uploading the local files to s3. [...]
aws s3 sync . s3://mybucket
While the AWS CLI supports the regular AWS Credentials handling via environment variables, you can also configure Multiple Configuration Profiles for yourself and other AWS accounts and switch as needed:
The AWS CLI supports switching between multiple profiles stored within the configuration file. [...] Each profile uses different credentials—perhaps from two different IAM users—and also specifies a different region. The first profile, default, specifies the region us-east-1. The second profile, test-user, specifies us-west-2. Note that, for profiles other than default, you must prefix the profile name with the string, profile.
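For instance, with an assumed profile named test-user (sketch only; the bucket name comes from the example above), the setup and upload could look like:

# Create/update a named profile (prompts for access key, secret key, region, and output format)
aws configure --profile test-user

# Upload the static site using that profile
aws s3 sync . s3://mybucket --profile test-user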
Assuming you want to upload to S3 storage, there are some good free apps out there. If you google "CloudBerry Labs", they have a free "S3 Explorer" application which lets you drag and drop your files to your S3 storage. When you first install and launch the app, there will be a place to configure your connection. That's where you'll put in your Access Key and Secret Key.
Apart from the AWS CLI, there are a number of 'S3 browsers'. These act very much like FTP clients, showing the folder structure and files on the remote store, and let you upload and download much as you would over FTP.
This isn't the right forum for recommendations, but if you search the usual places for well received s3 browsers you'll find plenty of options.
To upload a handful of files to S3 (the cloud storage and content distribution system), you can log in and use the S3 application in the AWS console.
https://console.aws.amazon.com/console/home?#
There's also a ton of documentation on AWS about the various APIs.