aws ec2 modern.ie image upload

Has anyone been successful in uploading a modern.ie vmdk image to AWS EC2?
I've tried via the ec2 import instance command:
ec2-import-instance IE10.Win7.For.Windows.VMWare\IE10_-_Win7-disk1.vmdk -f vmdk -t t2.small -a i386 -b xxxx --subnet subnet-xxxxx -p Windows -o %AWS_ACCESS_KEY% -w %AWS_SECRET_KEY% ...
but once I described the import, I got: ClientError: Unsupported Windows OS
After some reading, I attempted to create an AMI via the AWS CLI after uploading the file to S3, creating the policies, etc.:
aws ec2 import-image --cli-input-json "{ \"Description\": \"ModernIE Win7IE10\", \"DiskContainers\": [ { \"Description\": \"First CLI task\",
\"UserBucket\": { \"S3Bucket\": \"xxx_temp\", \"S3Key\" : \"IE10_-_Win7-disk1.vmdk\" } } ], \"LicenseType\": \"BYOL\", \"Architecture\": \"i386\", \"Platform\": \"Windows\"}"
But describing the import I get: "StatusMessage": "ClientError: Disk validation failed [Invalid S3 source location]"
I've even made the bucket URL public!
Anyone have any ideas?
Thanks!

Use the AWS CLI to investigate that error:
aws s3 ls s3://xxx_temp
If you do not see the IE10_-_Win7-disk1.vmdk listed there, then the S3 upload is your problem. Re-verify your S3 key.
Also check the bucket policy and make sure the configured IAM user for your CLI has access to that bucket.
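For the [Invalid S3 source location] case specifically, two more quick checks can help rule things out (bucket and key names taken from the question):
# confirm the object really exists under that exact key
aws s3api head-object --bucket xxx_temp --key IE10_-_Win7-disk1.vmdk
# confirm which region the bucket lives in; the import should be run in that same region
aws s3api get-bucket-location --bucket xxx_temp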

If you're seeing the Unsupported Windows OS error, I would check the prerequisites very carefully:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites-ImportInstance.html
Not all operating systems can be imported. I frequently have an issue importing a Linux VM where I've upgraded the kernel version and it becomes "Unsupported". The importer is very picky.
During the import process you can use the identifier returned from the import command to follow its status like so:
aws ec2 describe-import-image-tasks --cli-input-json "{\"ImportTaskIds\":[\"$IMPORT_ID\"]}"
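If you'd rather skip the JSON quoting, the same status check should also work with the plain parameter (assuming the ID is in $IMPORT_ID):
aws ec2 describe-import-image-tasks --import-task-ids "$IMPORT_ID"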
I have been most successful converting the VM to an OVA first, uploading THAT to S3 and running the import command against that.
If you are using VirtualBox you can do that from the command line:
vboxmanage export ${VM_NAME} -o MyExportedVM.ova;
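From there, a rough sketch of the upload and import, assuming a bucket named my-import-bucket (adjust the names to your own):
# upload the exported OVA
aws s3 cp MyExportedVM.ova s3://my-import-bucket/MyExportedVM.ova
# start the import against the OVA
aws ec2 import-image --description "Win7 IE10 OVA" --disk-containers "Format=ova,UserBucket={S3Bucket=my-import-bucket,S3Key=MyExportedVM.ova}"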

Related

gsutil rsync with s3 buckets gives InvalidAccessKeyId error

I am trying to copy all the data from an AWS S3 bucket to a GCS bucket. According to this answer, the rsync command should be able to do that, but I am receiving the following error when I try:
Caught non-retryable exception while listing s3://my-s3-source/: AccessDeniedException: 403 InvalidAccessKeyId
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>{REDACTED}</AWSAccessKeyId><RequestId>{REDACTED}</RequestId><HostId>{REDACTED}</HostId></Error>
CommandException: Caught non-retryable exception - aborting rsync
This is the command I am trying to run
gsutil -m rsync -r s3://my-s3-source gs://my-gcs-destination
I have the AWS CLI installed which is working fine with the same AccessKeyId and listing buckets as well as objects in the bucket.
Any idea what I am doing wrong here?
gsutil can work with both Google Storage and S3.
gsutil rsync -d -r s3://my-aws-bucket gs://example-bucket
You just need to configure it with both Google and AWS S3 credentials. To let gsutil reach S3, add your Amazon S3 credentials to ~/.aws/credentials, or store them in the .boto configuration file used by gsutil. Note that when you access an Amazon S3 bucket with gsutil, the Boto library uses your ~/.aws/credentials file to override other credentials, such as any that are stored in ~/.boto.
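For example, a minimal sketch of such an entry (the profile name and key values are placeholders):
# append the S3 keys to the shared AWS credentials file so gsutil's Boto layer can find them
cat >> ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
EOF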
=== 1st update ===
Also make sure you have the correct IAM permissions on the GCP side and the correct AWS IAM credentials. In addition, if you have a prior version of Migrate for Compute Engine (formerly Velostrata), use this documentation and make sure you set up the VPN, IAM credentials and AWS network. If you are using the current version (5.0), use the following documentation to check that everything is configured correctly.
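As a quick sanity check of both sides before retrying the rsync (bucket names taken from the question):
# the AWS side, with the same keys gsutil will use
aws s3 ls s3://my-s3-source
# the GCS side, with your Google credentials
gsutil ls gs://my-gcs-destination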

Connection to sts.amazonaws.com timed out when calling Python boto3 API from EC2 instance

I am trying to setup some build and deployment servers based on EC2 instances to deploy software to AWS via CloudFormation.
The current setup uses the AWS CLI to deploy CloudFormation templates, and authentication is handled using a credentials profile where the ~/.aws/config file has a profile with:
[profile x]
role_arn = x
credential_source = Ec2InstanceMetadata
region = x
The setup using the AWS CLI appears to be working fine, and can deploy CloudFormation templates, upload files to S3 etc.
I wanted to automate this further and use a configuration-based approach to allow for more flexibility in our deployments. To achieve this, I have written some Python code to parse a config file and use the Boto3 library (which the AWS CLI also uses) to replicate the functionality. However, when I try to do similar things in Boto3 (like deploy CloudFormation and upload files to S3), I get the following error: Connection to sts.amazonaws.com timed out. Unfortunately I can't provide the full stack trace since it's on a separate network. I am running Python 3.7 with boto3 1.21.13 and botocore 1.24.13.
I assume it might be because I need to setup a VPC endpoint for STS? However, I can't work out why and how the AWS CLI works fine, but Boto3 doesn't. Especially since AWS CLI uses Boto3 under the hood.
In addition, I have confirmed that I can retrieve instance metadata using curl from the EC2 instances.
To reproduce the error, this command fails for me:
python -c "import boto3;print(boto3.Session(profile_name='x').client('s3').list_objects('bucket')"
However this AWS cli command works:
aws --profile x s3 ls bucket
I guess I don't understand why the AWS CLI command works when the boto3 command fails. Why does boto3 need to call the sts.amazonaws.com endpoint when the AWS CLI seemingly doesn't? What am I missing?
The AWS CLI and boto3 both use botocore, but that is only a minor detail. Nevertheless, both the CLI and boto3, when run in the same environment with the same access to the credentials, should indeed be able to reach the same endpoint.
This:
aws sts get-caller-identity --profile x
and:
python -c "import boto3;print(boto3.Session(profile_name='x').client('sts').get_caller_identity())"
are equivalent and should make the same api calls to the same endpoint.
As an aside, I find it is often best not to have your code concerned with session handling at all. It seems simplest to me for the code to expect the environment to handle that. So just export AWS_PROFILE and run the code. This prevents other users of the script from having to have the same profile with the same name.
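For example (the script name here is just a placeholder):
export AWS_PROFILE=x
python deploy.py   # boto3 picks the profile up from the environment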
Yeah so it turns out I just needed to set/export AWS_STS_REGIONAL_ENDPOINTS='regional'.
After many hours of trawling the botocore and awscli source and logs, I found out that botocore sets it to 'legacy' by default, whereas v2 of the AWS CLI sets it to 'regional'.
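In shell form the fix amounts to something like this, reusing the profile name from the question (a sketch, not verified on the original network):
export AWS_STS_REGIONAL_ENDPOINTS=regional
export AWS_PROFILE=x
python -c "import boto3; print(boto3.client('sts').get_caller_identity())"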

AWS Cloudformation Windows 2016 EC2 S3 silent install

I have an architecture created with CloudFormation (written in JSON) that uses a Windows 2016 EC2 server and S3. I have 7 executables uploaded to my S3 bucket. I can manually silently install everything from a PowerShell for AWS prompt once I remote into the EC2 instance. I can do it one at a time, and I have even put it in a .ps1 file and run it in PowerShell for AWS, and it runs correctly.
I am now trying to get this to install silently when the EC2 instance is created. I just can't do it and I can't understand why. The JSON code looks correct. As you can see, I first download everything from the S3 bucket, switch to the c:\TEMP directory where they were all downloaded, then run the executables in unattended install mode. I don't get any errors in my CloudFormation template. It runs "successfully." The problem is that nothing happens. Is it a permissions thing? Any help is welcome and appreciated. Thanks!
Under the AWS::EC2::Instance section I have the UserData section looking something like this (I shortened the executable names below):
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"<powershell>\n",
"copy-S3Object -BucketName mySilentInstallBucket -KeyPrefix * -LocalFolder c:\\TEMP\\",
"\n",
"cd c:\\TEMP\\",
"\n",
"firefox.exe -S ",
"\n",
"notepadpp.exe /S",
"\n",
"Git.exe /SILENT",
"\n",
"</powershell>"
]]}}
This troubleshooting doc will cover the various reasons you may not be able to connect to S3: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-access-s3-bucket/
To connect to your S3 buckets from your EC2 instances, you need to do the following:
Create an AWS Identity and Access Management (IAM) profile role that grants access to Amazon S3.
Attach the IAM instance profile to the instance.
Validate permissions on your S3 bucket.
Validate network connectivity from the EC2 instance to Amazon S3.
Validate access to S3 buckets.
The CloudFormation template won't fail based on UserData execution exceptions.
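Since the stack reports success either way, it can help to RDP into the instance and re-run the first UserData step by hand with the same tooling, using the bucket name from the question:
# from the PowerShell for AWS prompt on the instance
Copy-S3Object -BucketName mySilentInstallBucket -KeyPrefix * -LocalFolder c:\TEMP\
If that copy fails interactively, the instance profile or bucket permissions are the problem rather than the script itself. On Windows Server 2016, EC2Launch also typically writes the user-data output to C:\ProgramData\Amazon\EC2-Windows\Launch\Log\UserdataExecution.log, which is worth a look.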

How can I transfer a remote file to my S3 bucket using AWS CLI?

I tried to follow the advice provided at https://stackoverflow.com/a/18136205/6608952 but was unsure how to supply the myAmazonKeypair .pem file path on the remote server.
scp -i yourAmazonKeypairPath.pem fileNameThatYouWantToTransfer.php ec2-user@ec2-00-000-000-15.us-west-2.compute.amazonaws.com:
The command completed after a few minutes with this display:
ssh: connect to host
myBucketEndpointName
port 22: Connection timed out
lost connection
I have a couple of very large files to transfer and would prefer not to have to download the files to my local computer and then re-upload them to the S3 bucket.
Any suggestions?
There is no direct way to upload files to S3 from a remote location, i.e. a URL.
So to achieve that, you have two options:
Download the file on your local machine and then upload it via AWS Console or AWS CLI.
Download the file to an AWS EC2 instance and upload it to S3 with the AWS CLI.
The first method is pretty simple, not much explanation needed.
But for the second method, you'll need to do the following (a short sketch of the key commands follows this list):
Create an EC2 instance in the same region as the S3 bucket. Or, if you already have an instance, log in / SSH to it.
Download the file from the source to the EC2 instance, via wget or curl, whichever is more comfortable.
Install the AWS CLI on the EC2 instance.
Create an IAM user and grant it permission for your S3 bucket.
Configure your AWS CLI with your IAM Credentials.
Upload your file to S3 Bucket with AWS CLI S3 CP Utility.
Terminate the Instance, if you set up the instance only for this.
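A minimal sketch of those steps on the instance (the URL, bucket, and file names are placeholders):
# download the file onto the instance
curl -L -O https://example.com/very-large-file.zip
# configure the CLI with the IAM user's keys
aws configure
# upload it to the bucket
aws s3 cp very-large-file.zip s3://your-bucket/very-large-file.zip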
You can do this easily with a shell script. If you have a list of URLs in files.txt, do it like this:
#!/bin/bash
input="files.txt"
while IFS= read -r line; do
  name=$(basename "$line")
  echo "$name"
  wget "$line"
  aws s3 mv "$name" <YOUR_S3_URI>
done < "$input"
Or for one file:
wget <FILE_URL> && aws s3 mv <FILE_NAME> <YOUR_S3_URI>
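If the machine is short on disk space, streaming may also work, since aws s3 cp can read from standard input (placeholders as above; the destination must be a full object key):
wget -O - <FILE_URL> | aws s3 cp - <YOUR_S3_URI>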

Configuring aws cli use fakes3

Keen to set up fakes3; I have it working via a Docker setup, running on port 4569. I cannot figure out how to test it using the AWS CLI (version 1.10.6), specifically how to change the port it connects to.
i.e. want to do a command like
$ aws s3 cp test.txt s3://mybucket/test2.txt
I need to specify the port. I've tried:
--port settings on the command line, i.e. AWS_ACCESS_KEY_ID=ignored AWS_SECRET_ACCESS_KEY=ignored aws s3 --profile fakes3 cp test.txt s3://mybucket/test2.txt (says it's not a valid parameter)
adding a profile and including end_point="localhost:4569" in the config in ~/.aws (gives an error about an AUTH key)
running fakes3 on port 443, but that then clashes with my local machine
Has anyone got aws cli working with fakes3?
$ aws s3 --version
aws-cli/1.10.6 Python/2.7.11 Darwin/15.2.0 botocore/1.3.28
Use the --endpoint-url argument. If fakes3 is listening on port 4569, try this:
aws --endpoint-url=http://localhost:4569 s3 ls
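And the copy command from the question would then presumably become:
aws --endpoint-url=http://localhost:4569 s3 cp test.txt s3://mybucket/test2.txt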