AWS VM Import of a hardened image

We successfully converted a VMDK file to an AWS AMI and launched it in EC2.
The VMDK is a hardened image: only a few pre-defined credentials (username and password) are allowed to log in.
I'm wondering how the AWS VM Import service could push an SSH public key into the newly created EC2 instance, given that AWS has no knowledge of the image's usernames and passwords, and no AWS agents or services (such as the AWS SDK or Systems Manager) are installed in the VM.
The OS is RHEL 7.
To summarise the question:
When an EC2 instance is created with a key pair attached for SSH login, how does the public key get pushed into that VM?
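For reference, on a stock AMI the key pair is not pushed into the guest by AWS at all; cloud-init inside the guest pulls the public key from the instance metadata service on first boot and appends it to the login user's authorized_keys. A rough, illustrative sketch of that step (not the actual cloud-init code; the user and path are assumptions):
import urllib.request

# EC2 instance metadata endpoint that exposes the key pair's public key (IMDSv1)
IMDS_KEY_URL = "http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key"

with urllib.request.urlopen(IMDS_KEY_URL, timeout=2) as resp:
    public_key = resp.read().decode()

# Append the key for the default login user (path is illustrative)
with open("/home/ec2-user/.ssh/authorized_keys", "a") as f:
    f.write(public_key + "\n")
An imported image that has no cloud-init (or an equivalent first-boot agent) will therefore not pick up the attached key pair on its own.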

Related

How to find the created date of the AWS Lightsail Instance?

I have about 15 Lightsail instances in my AWS account and I want to know when each of them was created (date and time), but I'm not able to find this information in the AWS Lightsail console.
You can use the AWS Command Line Interface (AWS CLI) to retrieve the creation date of your Lightsail instances. The following AWS CLI command will list all of your Lightsail instances and the creation date for each instance:
aws lightsail get-instances --query "instances[*].{Name:name, CreationDate:createdAt}"
This command uses the get-instances command to retrieve information about all of your Lightsail instances, and the --query option to extract the name and creation date of each instance.
Alternatively, you can also retrieve the creation date of a specific Lightsail instance by using the get-instance command and specifying the instance name:
aws lightsail get-instance --instance-name <instance_name> --query "instance.createdAt"
Replace <instance_name> with the name of the Lightsail instance you want to retrieve information for.
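If you prefer Python, a boto3 equivalent would look roughly like this (a sketch, assuming your credentials and default region are already configured):
import boto3

lightsail = boto3.client('lightsail')

# get_instances is paginated, so follow nextPageToken to cover all instances
resp = lightsail.get_instances()
instances = resp['instances']
while 'nextPageToken' in resp:
    resp = lightsail.get_instances(pageToken=resp['nextPageToken'])
    instances.extend(resp['instances'])

for inst in instances:
    print(inst['name'], inst['createdAt'])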
Note: The AWS CLI must be installed and configured on your local machine in order to use these commands.
If you haven't already done so, just follow these steps:
1- Install the AWS CLI:
You can install the AWS CLI on your local machine by following the installation instructions for your operating system. You can find the installation instructions at the following URL: https://aws.amazon.com/cli/
2- Configure the AWS CLI:
After installing the AWS CLI, you need to configure it with your AWS credentials. You can do this by running the following command:
aws configure
This will prompt you for your AWS access key ID, secret access key, default region name, and default output format. You can find your AWS access keys in the AWS Management Console.
3- Verify the configuration:
To verify that your AWS CLI is configured correctly, you can run the following command:
aws lightsail get-instances
This command should list all of your Lightsail instances in your AWS account.
With the AWS CLI installed and configured, you can now use the AWS CLI commands to retrieve information about your Lightsail instances, including the creation date.
I hope my answer helps you.

How to scp to ec2 instance via ssm agent using boto3 and send file

Hi, I need to transfer a file to an EC2 machine via the SSM agent. I have successfully installed the SSM agent on the EC2 instances, and from the UI I am able to start a session via Session Manager and log in to the shell of that EC2 machine.
Now I am trying to automate it via boto3, using the code below:
import boto3

ssm_client = boto3.client('ssm', 'us-west-2')
resp = ssm_client.send_command(
    DocumentName="AWS-RunShellScript",  # one of AWS' preconfigured documents
    Parameters={'commands': ['echo "hello world" >> /tmp/test.txt']},
    InstanceIds=['i-xxxxx'],
)
The above works fine and I am able to create a file called test.txt on the remote machine, but this is via an echo command.
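As a side note, send_command is asynchronous; if you want to confirm that the command actually ran, you can poll for the result with get_command_invocation, e.g. (a sketch using the same client; the instance id is a placeholder):
import time

command_id = resp['Command']['CommandId']
time.sleep(1)  # give SSM a moment to register the invocation

# Poll until the invocation reaches a terminal state
while True:
    inv = ssm_client.get_command_invocation(CommandId=command_id, InstanceId='i-xxxxx')
    if inv['Status'] not in ('Pending', 'InProgress', 'Delayed'):
        break
    time.sleep(1)

print(inv['Status'], inv['StandardOutputContent'])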
Instead, I need to send a file from my local machine to this remote EC2 machine via the SSM agent, so I did the following.
I modified "/etc/ssh/ssh_config" to add a proxy, as below:
# SSH over Session Manager
host i-* mi-*
ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
Then, in the above code, I tried to start a session with the code below, and that also succeeds:
response = ssm_client.start_session(Target='i-04843lr540028e96a')
Now I am not sure how to use this session response, or how to use this AWS SSM session to send a file.
Environment description:
Source: a pod running in an EKS cluster
Destination: an EC2 machine (which has the SSM agent running)
File to be transferred: an important private key that will be used by some process on the EC2 machine; it is different for each machine.
Solution tried:
I could push the file to S3 from the source and then use the SSM boto3 library to pull it from S3 and store it on the remote EC2 machine.
But I don't want to do that, because I don't want to store the private key in S3. I want to send the file directly from memory to the remote EC2 machine.
Basically, I want to achieve the scp over Session Manager that is mentioned in this AWS document: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html#sessions-start-ssh
If you have SSH over SSM set up, you can just use normal scp, like so:
scp file.txt ec2-user@i-04843lr540028e96a
If it isn't working, make sure you have:
Session Manager plugin installed locally
Your key pair on the instance and locally (you will need to define it in your ssh config, or via the -i switch)
SSM agent on the instance (installed by default on Amazon Linux 2)
An instance role attached to the instance that allows Session Manager (it needs to be there at boot, so if you just attached, reboot)
Reference: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html
If you need more detail, give me more info on your setup, and I'll try and help.
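If SSH over SSM is not an option, one possible workaround for the "no S3" constraint is to base64-encode the file content and write it out on the instance through send_command. A sketch (the instance id and destination path are placeholders; be aware the encoded content will appear in the Run Command history, which may matter for a private key):
import base64
import boto3

ssm_client = boto3.client('ssm', 'us-west-2')

# Read the key locally and base64-encode it so it survives shell quoting
with open('my_private_key.pem', 'rb') as f:
    encoded = base64.b64encode(f.read()).decode()

ssm_client.send_command(
    DocumentName='AWS-RunShellScript',
    InstanceIds=['i-xxxxx'],
    Parameters={'commands': [
        'echo %s | base64 -d > /opt/myapp/key.pem' % encoded,  # placeholder path
        'chmod 600 /opt/myapp/key.pem',
    ]},
)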

How do I pull Docker images from the Tutum private registry with Amazon ECS?

I am trying to set up an Amazon ECS deployment which uses an image from the Tutum private Docker registry. Tutum being private, it obviously requires authentication.
As per the ECS documentation, I've modified the file '/etc/ecs/ecs.config' on the EC2 instance to contain the correct authentication credentials for Tutum:
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"tutum.co":{"auth":"<auth-string>","email":"<my-email>"}}
The auth string is a Base64 encoding of my Tutum credentials: '<username>:<password>'.
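For reference, that auth string can be produced like this (a sketch; substitute your own Tutum username and password):
import base64

# dockercfg expects base64('<username>:<password>')
auth = base64.b64encode(b'<username>:<password>').decode()
print(auth)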
However, when I try to run the corresponding ECS task, it fails with this message: CannotPullContainerError: Authentication is required.
How do I properly configure ECS to authenticate against the Tutum registry, so I can successfully pull images from there?
It seems all it took was rebooting the EC2 instance so that the settings in '/etc/ecs/ecs.config' were applied.

Dockerrun.aws.json structure for ECR Repo

We are switching from Docker Hub to ECR and I'm curious how to structure the Dockerrun.aws.json file to use this image. I attempted to set the image name to <my_ECR_URL>/<repo_name>:<image_tag>, but this was not successful. I also saw the details of private registries using an authentication file on S3, but this doesn't seem like the correct route when aws ecr get-login is the recommended way to authenticate with ECR.
Can anyone point me to how I can use an ECR image in a Beanstalk Dockerrun.aws.json file?
If I look at the ECS Task Definition, there's a required attribute called com.amazonaws.ecs.capability.ecr-auth, but I'm not setting that anywhere in my Dockerrun.aws.json file and I'm not sure what needs to be there. Perhaps it is an S3 bucket? Something is needed, as every time I try to run the Elastic Beanstalk-created tasks from ECS, I get:
Run tasks failed
Reasons : ATTRIBUTE
Any insights are greatly appreciated.
Update: I see from some other threads that this used to occur with earlier versions of the ECS agent, but I am currently running agent version 1.6.0 and Docker version 1.7.1, which I believe are the recommended versions. Is this possibly an issue with the Docker version?
So it turns out the ECS agent can only pull ECR images from version 1.7 onward, and that's where mine was falling short. Updating the agent resolved my issue; hopefully it helps someone else.
This is most likely an issue with IAM roles if you are using a role that was previously created for Elastic Beanstalk. Ensure that the role that Elastic Beanstalk is running with has the AmazonEC2ContainerRegistryReadOnly managed policy attached.
Source: http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_IAM_policies.html
Support for ECR was added in version 1.7.0 of the ECS Agent.
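If you want to check which agent version your container instances are actually running before updating, a boto3 sketch (the cluster name is a placeholder):
import boto3

ecs = boto3.client('ecs')

# 'default' is a placeholder cluster name
arns = ecs.list_container_instances(cluster='default')['containerInstanceArns']
if arns:
    detail = ecs.describe_container_instances(cluster='default', containerInstances=arns)
    for ci in detail['containerInstances']:
        print(ci['ec2InstanceId'], ci['versionInfo']['agentVersion'])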
When using Elastic Beanstalk and ECR you don't need to authenticate. Just make sure the instance profile has the AmazonEC2ContainerRegistryReadOnly policy attached.
You can store your custom Docker images in AWS with Amazon EC2 Container Registry (Amazon ECR). When you store your Docker images in Amazon ECR, Elastic Beanstalk automatically authenticates to the Amazon ECR registry with your environment's instance profile, so you don't need to generate an authentication file and upload it to Amazon Simple Storage Service (Amazon S3).
You do, however, need to provide your instances with permission to access the images in your Amazon ECR repository by adding permissions to your environment's instance profile. You can attach the AmazonEC2ContainerRegistryReadOnly managed policy to the instance profile to provide read-only access to all Amazon ECR repositories in your account, or grant access to a single repository by using the following template to create a custom policy:
Source: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html
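Attaching that managed policy can also be done programmatically; a boto3 sketch (the role name is a placeholder for whatever role backs your environment's instance profile, commonly aws-elasticbeanstalk-ec2-role):
import boto3

iam = boto3.client('iam')

# Role name is an assumption; use the role behind your environment's instance profile
iam.attach_role_policy(
    RoleName='aws-elasticbeanstalk-ec2-role',
    PolicyArn='arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly',
)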

Error SSHing to Elastic MapReduce JobFlow on AWS

When following the tutorial instructions for connecting to my JobFlow in EMR, I type the following:
./elastic-mapreduce --jobflow j-3FLVMX9CYE5L6 --ssh
and get this error:
Permission denied (publickey)
I'm already able to run other elastic-mapreduce commands just fine to create flows etc., so I'm assuming there are security settings required on the actual master instance for the flow, but nothing in the tutorial explains how to configure this (after all, I need to SSH into it to do the configuration in the first place!)
I found that I need to log in as user "hadoop" using the EC2 key pair, and not any of the usual suspects (ec2-user, root, etc.), like:
ssh -i privatekey.pem hadoop@masternode
Hope this is useful to someone.
OK, now I feel sheepish: I was using the Amazon CloudFront key pair from my initial account setup rather than the key pair associated with my account for accessing EC2 instances, which is accessible from EC2 > Network & Security > Key Pairs in the AWS Management Console.
The command "ssh -i privatekey.pem hadoop@masternode" worked great. The user "hadoop" must be used for EC2 Elastic MapReduce.