AWS ImportImage operation: S3 bucket does not exist - amazon-web-services

I'm trying to import a Windows 2012 OVA file into AWS, following this documentation:
AWS VMWare Import
I've created an S3 bucket to store the OVA files, and the OVA files have been uploaded there.
But when I try to import the image files into AWS, I get an error:
aws ec2 import-image --description "server1" --disk-containers file://containers.json --profile=company-dlab_us-east-1
An error occurred (InvalidParameter) when calling the ImportImage operation: S3 bucket does not exist: s3://companyvmimport/
Which is strange because I can list the bucket I'm trying to upload to using the aws command line:
aws s3 ls --profile=company-dlab_us-east-1
2016-10-20 09:52:33 companyvmimport
This is my containers.json file:
[
  {
    "Description": "server1",
    "Format": "ova",
    "UserBucket": {
      "S3Bucket": "s3://companyvmimport/",
      "S3Key": "server1.ova"
    }
  }
]
Where am I going wrong? How can I get this to work?

I think you have a copy/paste issue: in your containers.json file you reference the bucket as s3://companyvmimport, but the error is about kpmgvmimport.
In any case, you don't need to include the s3:// protocol in the JSON.
Your JSON file should look like this:
[
  {
    "Description": "server1",
    "Format": "ova",
    "UserBucket": {
      "S3Bucket": "companyvmimport",
      "S3Key": "server1.ova"
    }
  }
]
If the file is not right at the "root" of the bucket, you need to specify the full path in S3Key.
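For reference, here's a minimal boto3 sketch of the same import call (the region is an assumption based on the profile name in the question, and the bucket/key values come from the corrected JSON above); the point is the same: pass the bare bucket name, not an s3:// URL.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.import_image(
    Description="server1",
    DiskContainers=[
        {
            "Description": "server1",
            "Format": "ova",
            "UserBucket": {
                "S3Bucket": "companyvmimport",  # bucket name only, no s3:// prefix
                "S3Key": "server1.ova",
            },
        }
    ],
)
print(response["ImportTaskId"])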

I think the comment in the answer, about setting the custom policy on the S3 bucket, contains JSON that may grant access to everyone (which may not be what's desired).
If you add a Principal statement to the JSON you can limit the access to just yourself.
I ran into this same issue (User does not have access to the S3 object) and, after fighting it for a while, finally figured it out with the help of this post and some further research. I opened a separate post specifically for this "does not have access" issue.
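To illustrate the idea of limiting access with a Principal, here's a sketch of a more restrictive bucket policy applied with boto3; the account ID, user name and bucket name are placeholders, not values from the original posts.

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LimitedRead",
            "Effect": "Allow",
            # Grant access to a single principal instead of "Principal": "*"
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/vm-import-user"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::companyvmimport",
                "arn:aws:s3:::companyvmimport/*"
            ],
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="companyvmimport", Policy=json.dumps(policy))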

Related

AWS Quicksight: How to Get the latest Data from S3

My QuickSight data set currently takes everything in the S3 bucket.
(S3 sample) https://i.stack.imgur.com/cO8kL.png
But S3 keeps changing the folder based on the date (01/, 02/, 03/, and so on). Is there a way to only take the latest data, not all of it?
This is my current manifest:
{
  "fileLocations": [
    {
      "URIPrefixes": [
        "https://sample-S3bucket.amazonaws.com/"
      ]
    }
  ],
  "globalUploadSettings": {
    "format": "JSON"
  }
}
There might be a simple solution that I might not know about.
You could set QuickSight to read from another bucket and add a Lambda that is triggered when a new file is uploaded into your existing bucket (see the sketch below). This Lambda would:
Remove any files from the bucket which QuickSight is reading from
Copy over the latest file into that bucket
Create a QuickSight SPICE ingestion via the API
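A rough Lambda sketch of that approach, assuming an S3 ObjectCreated trigger on the existing bucket; the read bucket name, account ID and data set ID are placeholders, and error handling is omitted.

import time
import boto3

s3 = boto3.resource("s3")
quicksight = boto3.client("quicksight")

READ_BUCKET = "quicksight-read-bucket"  # the bucket the QuickSight manifest points at
ACCOUNT_ID = "111122223333"
DATA_SET_ID = "my-dataset-id"

def handler(event, context):
    record = event["Records"][0]["s3"]
    source_bucket = record["bucket"]["name"]
    source_key = record["object"]["key"]

    # 1. Remove whatever QuickSight is currently reading.
    s3.Bucket(READ_BUCKET).objects.all().delete()

    # 2. Copy the newly uploaded file into the read bucket.
    s3.Bucket(READ_BUCKET).copy({"Bucket": source_bucket, "Key": source_key}, source_key)

    # 3. Kick off a SPICE refresh.
    quicksight.create_ingestion(
        AwsAccountId=ACCOUNT_ID,
        DataSetId=DATA_SET_ID,
        IngestionId=f"refresh-{int(time.time())}",
    )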

Amazon Transcribe and Golang SDK BadRequestException

I uploaded a .flac file to an Amazon S3 bucket, but when I try to transcribe the audio using the Amazon Transcribe Golang SDK I get the error below. I tried making the .flac file in the S3 bucket public but still get the same error, so I don't think it's a permission issue. Is there anything I'm missing that prevents the Transcribe service from accessing the file in the S3 bucket? The API user that is uploading and transcribing has full access to the S3 and Transcribe services.
example Go code:
jobInput := transcribe.StartTranscriptionJobInput{
    JobExecutionSettings: &transcribe.JobExecutionSettings{
        AllowDeferredExecution: aws.Bool(true),
        DataAccessRoleArn:      aws.String("my-arn"),
    },
    LanguageCode: aws.String("en-US"),
    Media: &transcribe.Media{
        MediaFileUri: aws.String("https://s3.us-east-1.amazonaws.com/{MyBucket}/{MyObjectKey}"),
    },
    Settings: &transcribe.Settings{
        MaxAlternatives:   aws.Int64(2),
        MaxSpeakerLabels:  aws.Int64(2),
        ShowAlternatives:  aws.Bool(true),
        ShowSpeakerLabels: aws.Bool(true),
    },
    TranscriptionJobName: aws.String("jobName"),
}
Amazon Transcribe response:
BadRequestException: The S3 URI that you provided can't be accessed. Make sure that you have read permission and try your request again.
My issue was that the audio file being uploaded to S3 was specifying an ACL. I removed that from the S3 upload code and I no longer get the error. Also, per the docs, if you have "transcribe" in your S3 bucket name, the Transcribe service will have permission to access it. I made that change as well, but you still need to ensure you aren't using an ACL.
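The question uses the Go SDK, but for illustration this is the shape of the fix in boto3 terms: upload the object with no ACL argument at all (the bucket and file names here are placeholders).

import boto3

s3 = boto3.client("s3")

# No ExtraArgs={"ACL": ...} here -- per the answer above, setting an ACL
# on upload was what broke Transcribe's access to the object.
s3.upload_file("audio.flac", "my-transcribe-input-bucket", "audio.flac")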

How is the S3 URL created for CodeDeploy, and how to find it with the commit ID (AWS)

I have been trying to figure out how to find the S3 URL or S3 object name that is created after a CodeDeploy deployment with a new commit ID.
Here is the AWS CLI way to list application revisions and their S3 location:
aws deploy list-application-revisions --application-name <your application name>
Example output:
{ "revisionType": "S3",
"s3Location": {
"bucket:" "mybucket",
"key": "mys3objectname",
"bundleType": "zip",
"eTag": "ff1e77d70adaedfd14cecba209811a94"
}
}
To construct an s3 url from this, use:
https://s3-<region>.amazonaws.com/<bucket>/<key>
If you need to find your application name, use:
aws deploy list-applications
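A small boto3 sketch of the same lookup, assuming the application name is known and the region of your revisions bucket; it prints an HTTPS URL for each S3 revision using the template above.

import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")
app_name = "my-application"   # placeholder; from `aws deploy list-applications`
region = "us-east-1"          # placeholder; use your bucket's region

revisions = codedeploy.list_application_revisions(applicationName=app_name)["revisions"]
for revision in revisions:
    if revision["revisionType"] == "S3":
        loc = revision["s3Location"]
        print(f"https://s3-{region}.amazonaws.com/{loc['bucket']}/{loc['key']}")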

AWS CLI s3 copy fails with 403 error, trying to administrate a user-uploaded object

Trying to copy a file from an S3 bucket to my local machine:
aws s3 cp s3://my-bucket-name/audio-0b7ea3d0-13ab-4c7c-ac66-1bec2e572c14.wav ./
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Things I have confirmed:
I'm using version aws-cli/1.11.13 Python/3.5.2 Linux/4.4.0-75-generic botocore/1.4.70
The S3 Object key is correct. I have copied it directly from the S3 web interface.
The AWS CLI is configured with valid credentials. I generated a new key/secret pair. I deleted the ~/.aws folder before re-configuring the AWS CLI. The IAM web interface confirms that the user specified by the ARN is in fact making use of S3 via the CLI.
The IAM user is granted the S3 full access managed policy, per this SO post. I removed all of this user's policies, and then added only the AWS managed policy called AdministratorAccess, which includes "S3, Full access, All resources." Is there a different way to grant access via the CLI? I did not believe so.
Bucket policy is intended to grant wide open access:
{
    "Sid": "AdminAccess",
    "Effect": "Allow",
    "Principal": "*",
    "Action": [
        "s3:*"
    ],
    "Resource": [
        "arn:aws:s3:::my-bucket-name",
        "arn:aws:s3:::my-bucket-name/*"
    ]
}
How did I upload this object?
I uploaded this object using AWS Signature v4 signed upload policy from a web app in the client browser directly to AWS.
It turns out, looking at the object properties, that the owner of the object is "Anonymous", and the "Anonymous" user also has full permission on this object.
I believe this is why I'm not able to access this object (I'm authenticated). Example: Since the "Anonymous" user has full permission, I am able to access via GET using a Web browser. This is functioning as designed. The S3 bucket is for uploading files which then become available for public consumption.
So when the file is POST'ed with the upload policy, the resulting owner is "Anonymous".
In this case, acl=bucket-owner-full-control should be used while uploading the object so the bucket owner can control the object.
Doing this, the owner will still be "Anonymous"; however, it gives the bucket owner (me) full permission, and I should be able to access the object after that via the AWS CLI.
Note that acl=ec2-bundle-read is a default that's actually hard-coded into the latest AWS SDK. See https://github.com/aws/aws-sdk-java/blob/7844c64cf248aed889811bf2e871ad6b276a89ca/aws-java-sdk-ec2/src/main/java/com/amazonaws/services/ec2/util/S3UploadPolicy.java#L77
It was necessary to copy S3UploadPolicy.java into my own codebase (it's an entirely portable little utility class, it turns out) and modify it in order to use acl=bucket-owner-full-control. And I have verified that this affords the administration of uploaded objects via AWS CLI.
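The answer above patched the Java SDK's S3UploadPolicy; purely as an illustration of the same idea, a presigned POST in boto3 can pin the ACL in the upload policy like this (the bucket and key prefix are placeholders, not the original setup).

import boto3

s3 = boto3.client("s3")

post = s3.generate_presigned_post(
    Bucket="my-bucket-name",
    Key="uploads/${filename}",
    Fields={"acl": "bucket-owner-full-control"},
    Conditions=[{"acl": "bucket-owner-full-control"}],
    ExpiresIn=3600,
)
# post["url"] and post["fields"] are what the browser form POSTs; objects
# uploaded this way can then be administered by the bucket owner.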
In my case I have 3 accounts (A1, A2, A3) with 3 canonical users (canonical_user_account_A1, canonical_user_account_A2, canonical_user_account_A3) and 1 IAM role (R1) that is in A3.
Files are in a bucket in A2, and the files' owner is canonical_user_account_A1 (this is on purpose). When I tried to list the files I didn't get any error, but when I tried to download one of them I got:
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
I had added List and Get permissions for R1 in the bucket policy and in the role permissions, but in this case that is not enough: if the account where the bucket is located is not the object owner, it can't allow users from another account to get (download) the files. So I needed to make sure that when I upload files I'm using:
access_control_policy = {
    'Grants': [
        {
            'Grantee': {
                'ID': canonical_user_account_A2,
                'Type': 'CanonicalUser'
            },
            'Permission': 'READ'
        },
        {
            'Grantee': {
                'ID': canonical_user_account_A3,
                'Type': 'CanonicalUser'
            },
            'Permission': 'READ'
        },
    ],
    'Owner': {
        'ID': canonical_user_account_A1
    }
}
upload_extra_args = {'ACL': 'bucket-owner-full-control'}
s3_client.upload_file(file_path, bucket_name, s3_file_path, ExtraArgs=upload_extra_args)
s3_client.put_object_acl(AccessControlPolicy=access_control_policy, Bucket=bucket_name, Key=s3_file_path)
This allows both canonical_user_account_A2 and canonical_user_account_A3 to read and download the file.
I ran into a similar permissions issue when trying to download from s3 something I had uploaded previously. Turns out it has nothing to do with the bucket policy and everything to do with how your credentials are set when you upload and how you grant access privileges at time of upload. See this for more information on several ways to solve the problem.
In my case, the above error appeared when the machine that was trying to contact S3 had a system time far from the current one. Setting the correct time helped.
For security reasons, AWS S3 will return Forbidden (403) even if the file does not exist.
Please ensure you have given the proper S3 path while downloading.
You can read more about it here.

Cannot authenticate to Docker in Elastic Beanstalk through S3

http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html#docker-singlecontainer-dockerrun-privaterepo
I'm following the instructions here to connect to a private Docker Hub container from Elastic Beanstalk, but it stubbornly refuses to work. It seems like when calling docker login in Docker 1.12 the resulting file has no email property, but it sounds like AWS expects it, so I created a file called dockercfg.json that looks like this:
{
    "https://index.docker.io/v1/": {
        "auth": "Y2...Fz",
        "email": "c...n#gmail.com"
    }
}
The relevant piece of my Dockerrun.aws.json file looks like this:
"Authentication": {
"Bucket": "elasticbeanstalk-us-west-2-9...4",
"Key": "dockercfg.json"
},
And I have the file uploaded at the root of the S3 bucket. Why do I still get errors saying Error: image c...6/w...t:23 not found. Check snapshot logs for details.? I am sure the names are right, and this would work if it were a public repository. The full error is below. I am deploying from GitHub with CircleCI if it makes a difference; happy to provide any other information needed.
INFO: Deploying new version to instance(s).
WARN: Failed to pull Docker image c...6/w...t:23, retrying...
ERROR: Failed to pull Docker image c...6/w...t:23: Pulling repository docker.io/c...6/w...t
Error: image c...6/w...t:23 not found. Check snapshot logs for details.
ERROR: [Instance: i-06b66f5121d8d23c3] Command failed on instance. Return code: 1 Output: (TRUNCATED)...b-project
Error: image c...6/w...t:23 not found
Failed to pull Docker image c...6/w...t:23: Pulling repository docker.io/c...6/w...t
Error: image c...6/w...t:23 not found. Check snapshot logs for details.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
ERROR: Unsuccessful command execution on instance id(s) 'i-06b66f5121d8d23c3'. Aborting the operation.
ERROR: Failed to deploy application.
ERROR: Failed to deploy application.
EDIT: Here's the full Dockerrun file. Note that %BUILD_NUM% is just an int, I can verify that works.
{
    "AWSEBDockerrunVersion": "1",
    "Authentication": {
        "Bucket": "elasticbeanstalk-us-west-2-9...4",
        "Key": "dockercfg.json"
    },
    "Image": {
        "Name": "c...6/w...t:%BUILD_NUM%",
        "Update": "true"
    },
    "Ports": [
        {
            "ContainerPort": "8080"
        }
    ]
}
EDIT: Also, I have verified that this works if I make this Docker Hub container public.
OK, let's do this;
Looking at the same doc page,
With Docker version 1.6.2 and earlier, the docker login command creates the authentication file in ~/.dockercfg in the following format:
{
    "server" : {
        "auth" : "auth_token",
        "email" : "email"
    }
}
You already got this part correct, I see. Please double-check the cases below one by one:
1) Are you hosting the S3 bucket in the same region?
The Amazon S3 bucket must be hosted in the same region as the environment that is using it. Elastic Beanstalk cannot download files from an Amazon S3 bucket hosted in other regions.
2) Have you checked the required permissions?
Grant permissions for the s3:GetObject operation to the IAM role in the instance profile. For details, see Managing Elastic Beanstalk Instance Profiles.
3) Have you got your S3 bucket info in your config file? (I think you got this too)
Include the Amazon S3 bucket information in the Authentication (v1) or authentication (v2) parameter in your Dockerrun.aws.json file.
Can't see your permissions or your env region, so please double check those.
If that does not work, I'd upgrade to Docker 1.7+ if possible and use the corresponding ~/.docker/config.json style.
Depending on your Docker version, this file is saved as either ~/.dockercfg or ~/.docker/config.json.
cat ~/.docker/config.json
Output:
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "zq212MzEXAMPLE7o6T25Dk0i"
        }
    }
}
Important:
Newer versions of Docker create a configuration file as shown above with an outer auths object. The Amazon ECS agent only supports dockercfg authentication data that is in the below format, without the auths object. If you have the jq utility installed, you can extract this data with the following command:
cat ~/.docker/config.json | jq .auths
Output:
{
    "https://index.docker.io/v1/": {
        "auth": "zq212MzEXAMPLE7o6T25Dk0i",
        "email": "email#example.com"
    }
}
Create a file called my-dockercfg using the above content.
Upload the file into the S3 bucket with the specified key (my-dockercfg) in the Dockerrun.aws.json file:
{
    "AWSEBDockerrunVersion": 2,
    "authentication": {
        "bucket": "elasticbeanstalk-us-west-2-618148269374",
        "key": "my-dockercfg"
    }
}
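If it helps, a tiny boto3 sketch of that upload step, using the bucket and key from the Dockerrun.aws.json above (adjust the region and local file path to your own setup):

import boto3

s3 = boto3.client("s3", region_name="us-west-2")
# Upload the extracted auth file to the bucket/key the Dockerrun references.
s3.upload_file("my-dockercfg", "elasticbeanstalk-us-west-2-618148269374", "my-dockercfg")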