AWS: error while creating AMI from Ubuntu - initramfs/initrd error

I've created a simple VM in VirtualBox and installed Ubuntu, however, I am unable to import this to AWS and generate an AMI from it.
Operating system: Ubuntu 20.04.4 LTS
Kernel: Linux 5.4.0-104-generic
I've followed the steps provided in the docs and set up role-policy.json & trust-policy.json:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html#vmimport-role
https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
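For reference, the role setup from those docs boils down to roughly the following commands (vmimport is the role/policy name the guide uses; the policy files are the ones named above):
aws iam create-role --role-name vmimport \
    --assume-role-policy-document "file://trust-policy.json"
aws iam put-role-policy --role-name vmimport --policy-name vmimport \
    --policy-document "file://role-policy.json"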
I keep running into the error:
{
    "ImportImageTasks": [
        {
            "Description": "My server VM",
            "ImportTaskId": "import-ami-xxx",
            "SnapshotDetails": [
                {
                    "DeviceName": "/dev/sde",
                    "DiskImageSize": 2362320896.0,
                    "Format": "VMDK",
                    "Status": "completed",
                    "Url": "s3://xxxx/simple-vm.ova",
                    "UserBucket": {
                        "S3Bucket": "xxx",
                        "S3Key": "simple-vm.ova"
                    }
                }
            ],
            "Status": "deleted",
            "StatusMessage": "ClientError: We were unable to read your import's initramfs/initrd to determine what drivers your import requires to run in EC2.",
            "Tags": []
        }
    ]
}
I've tried converting the disk to and from .vdi and .vmdk.
I've also tried disabling the floppy drive and updating the initramfs.

I ran into this error and was able to get around it by using import-snapshot instead of import-image. Then I could create the image from the snapshot by the ordinary means.
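A rough sketch of that workaround with the AWS CLI (the task ID, snapshot ID, AMI name, and device name below are placeholders; note that import-snapshot takes the disk image itself, e.g. the .vmdk extracted from the OVA, rather than the OVA):
# Import the disk as an EBS snapshot
aws ec2 import-snapshot \
    --description "My server VM" \
    --disk-container "Format=VMDK,UserBucket={S3Bucket=xxx,S3Key=simple-vm.vmdk}"

# Poll until the task completes and note the resulting SnapshotId
aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-xxxxxxxx

# Register an AMI from the snapshot
aws ec2 register-image \
    --name my-imported-ami \
    --architecture x86_64 \
    --virtualization-type hvm \
    --ena-support \
    --root-device-name /dev/sda1 \
    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-xxxxxxxx}"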

Related

Why can't I change "spark.driver.memory" value in AWS Elastic Map Reduce?

I want to tune my Spark cluster on AWS EMR, but I can't change the default value of spark.driver.memory, which causes every Spark application to crash because my dataset is big.
I tried editing the spark-defaults.conf file manually on the master machine, and I also tried configuring it directly with a JSON file on the EMR dashboard while creating the cluster.
Here's the JSON file used:
[
    {
        "Classification": "spark-defaults",
        "Properties": {
            "spark.driver.memory": "7g",
            "spark.driver.cores": "5",
            "spark.executor.memory": "7g",
            "spark.executor.cores": "5",
            "spark.executor.instances": "11"
        }
    }
]
After using the JSON file, the configurations are correctly written to spark-defaults.conf, but on the Spark dashboard spark.driver.memory still shows the default value of 1000M, while the other values are applied correctly. Has anyone run into the same problem?
Thank you in advance.
You need to set
maximizeResourceAllocation=true
in the spark classification settings:
[
    {
        "Classification": "spark",
        "Properties": {
            "maximizeResourceAllocation": "true"
        }
    }
]
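If you create the cluster from the CLI, the same classifications can be passed as a file. A minimal sketch (the release label, instance type/count, and the file name configurations.json are placeholders):
aws emr create-cluster \
    --name "spark-cluster" \
    --release-label emr-5.36.0 \
    --applications Name=Spark \
    --configurations "file://configurations.json" \
    --instance-type m5.xlarge \
    --instance-count 3 \
    --use-default-roles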

Installing authorized_keys file under custom user for Ubuntu AWS

I'm trying to set up an Ubuntu server and log in with a non-default user. I've used cloud-config with the user data to set up an initial user, and Packer to provision the server:
system_info:
  default_user:
    name: my_user
    shell: /bin/bash
    home: /home/my_user
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
Packer logs in and provisions the server as my_user, but when I launch an instance from the AMI, AWS installs the authorized_keys files under /home/ubuntu/.ssh/
Packer config:
{
    "variables": {
        "aws_profile": ""
    },
    "builders": [{
        "type": "amazon-ebs",
        "profile": "{{user `aws_profile`}}",
        "region": "eu-west-1",
        "instance_type": "c5.large",
        "source_ami_filter": {
            "most_recent": true,
            "owners": ["099720109477"],
            "filters": {
                "name": "*ubuntu-xenial-16.04-amd64-server-*",
                "virtualization-type": "hvm",
                "root-device-type": "ebs"
            }
        },
        "ami_name": "my_ami_{{timestamp}}",
        "ssh_username": "my_user",
        "user_data_file": "cloud-config"
    }],
    "provisioners": [{
        "type": "shell",
        "pause_before": "10s",
        "inline": [
            "echo 'run some commands'"
        ]
    }]
}
Once the server has launched, both ubuntu and my_user users exist in /etc/passwd:
my_user:x:1000:1002:Ubuntu:/home/my_user:/bin/bash
ubuntu:x:1001:1003:Ubuntu:/home/ubuntu:/bin/bash
At what point does the ubuntu user get created, and is there a way to install the authorized_keys file under /home/my_user/.ssh at launch instead of ubuntu?
To persist the default user when launching new EC2 instances from the AMI, you have to change the value in /etc/cloud/cloud.cfg and update this part:
system_info:
  default_user:
    # Update this!
    name: ubuntu
You can add your public keys when you create the user using cloud-init. Here is how you do it.
users:
  - name: <username>
    groups: [ wheel ]
    sudo: [ "ALL=(ALL) NOPASSWD:ALL" ]
    shell: /bin/bash
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz<your public key>...
Adding an additional SSH user account with cloud-init
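One way to apply that cloud.cfg change while Packer provisions the image is a shell step along these lines (a sketch; it assumes "name: ubuntu" appears only under default_user, as in the stock Ubuntu cloud.cfg):
# Point cloud-init's default user at my_user so instances launched from
# the AMI install authorized_keys under /home/my_user/.ssh
sudo sed -i 's/name: ubuntu/name: my_user/' /etc/cloud/cloud.cfg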

aws ec2 import-image error "ClientError: GRUB doesn't exist in /etc/default"

I am following the instructions from http://docs.aws.amazon.com/vm-import/latest/userguide/import-vm-image.html to import an OVA. Here are the summarized steps I followed.
Step 1: Upload an OVA to S3 bucket.
Step 2: Create trust policy
Step 3: Create role policy
Step 4: Create containers.json with bucket name and ova filename.
Step 5: Run command for import-image
Command: aws ec2 import-image --description "My Unique OVA" --disk-containers file://containers.json
Step 6: Get the "ImportTaskId": "import-ami-fgi2cyyd" (in my case)
Step 7: Check status of import task
Error:
C:\Users\joe>aws ec2 describe-import-image-tasks --import-task-ids import-ami-fgi2cyyd
{
    "ImportImageTasks": [
        {
            "Status": "deleted",
            "SnapshotDetails": [
                {
                    "UserBucket": {
                        "S3Bucket": "my_unique_bucket",
                        "S3Key": "my_unique_ova.ova"
                    },
                    "DiskImageSize": 2871726592.0,
                    "Format": "VMDK"
                }
            ],
            "Description": "My Unique OVA",
            "StatusMessage": "ClientError: GRUB doesn't exist in /etc/default directory.",
            "ImportTaskId": "import-ami-fgi2cyyd"
        }
    ]
}
What am I doing wrong? I am on free-tier trying things out.
Contents of containers.json:
[
    {
        "Description": "My Unique OVA",
        "Format": "ova",
        "UserBucket": {
            "S3Bucket": "my_unique_bucket",
            "S3Key": "my_unique_ova.ova"
        }
    }
]
The OVA file was corrupted in my case. I tried it with a smaller OVA and it worked fine.
Alright, I figured it out. The problem I ran into, which I assume is the case for you as well, is that you probably aren't using the GRUB boot loader but rather LILO. I was able to change the boot loader by going into the GUI (startx) and opening System Configuration. Under the Boot menu I was able to switch from LILO to GRUB. Once I did that, I got further in the EC2 VM import process. Hope that helps.
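If the guest is Debian/Ubuntu-based, a quick way to check for and fix this on the source VM before exporting the OVA (a sketch; the package name assumes apt):
# Verify the file the importer is looking for actually exists
test -f /etc/default/grub && echo "GRUB config present" || echo "GRUB config missing"

# If it is missing, installing GRUB normally creates it
sudo apt-get install -y grub-pc
sudo update-grub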

Map 'ec2-register snapshot' syntax onto 'register-image AMI' syntax using awscli

What is the correct syntax for mapping a snapshot onto an AMI using awscli?
More explicitly, how do I map the old syntax
'ec2-register -s snap-9abc1234 --kernel 99abcdef' onto the new syntax
'aws ec2 register-image'?
It's the following:
aws ec2 register-image --kernel-id <your-kernel> --root-device-name /dev/sda1 --block-device-mappings [list in JSON shown below]
[
    {
        "VirtualName": "string",
        "DeviceName": "string",
        "Ebs": {
            "SnapshotId": "string",
            "VolumeSize": integer,
            "DeleteOnTermination": true|false,
            "VolumeType": "standard"|"io1",
            "Iops": integer
        },
        "NoDevice": "string"
    }
    ...
]
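For the concrete command in the question, the mapping looks roughly like this (the AMI name is a placeholder, and I'm assuming the kernel ID is written as aki-99abcdef in the new tooling; --name is required by register-image):
aws ec2 register-image \
    --name my-imported-ami \
    --kernel-id aki-99abcdef \
    --root-device-name /dev/sda1 \
    --block-device-mappings '[{"DeviceName": "/dev/sda1", "Ebs": {"SnapshotId": "snap-9abc1234"}}]'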
You can run aws ec2 register-image help for help on the command.
Make sure you are using the awscli Python package and not the aws package, as that one is a different (unofficial) tool.
Here's a link to the github repo:
https://github.com/aws/aws-cli

AWS OpsWorks RDS Configuration for Tomcat context.xml

I am trying to deploy an app named abcd with the artifact abcd.war. I want to configure an external datasource. Below is my abcd.war/META-INF/context.xml file:
<Context>
  <ResourceLink global="jdbc/abcdDataSource1" name="jdbc/abcdDataSource1" type="javax.sql.DataSource"/>
  <ResourceLink global="jdbc/abcdDataSource2" name="jdbc/abcdDataSource2" type="javax.sql.DataSource"/>
</Context>
I configured the below custom JSON during a deployment
{
    "datasources": {
        "fa": "jdbc/abcdDataSource1",
        "fa": "jdbc/abcdDataSource2"
    },
    "deploy": {
        "fa": {
            "database": {
                "username": "un",
                "password": "pass",
                "database": "ds1",
                "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
                "adapter": "mysql"
            },
            "database": {
                "username": "un",
                "password": "pass",
                "database": "ds2",
                "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
                "adapter": "mysql"
            }
        }
    }
}
I also added the recipe opsworks_java::context during the configure phase. But it doesn't seem to be working, and I always get the message below:
[2014-01-11T16:12:48+00:00] INFO: Processing template[context file for abcd] action create (opsworks_java::context line 16)
[2014-01-11T16:12:48+00:00] DEBUG: Skipping template[context file for abcd] due to only_if ruby block
Can anyone please help on what I am missing with OpsWorks configuration?
You can only configure one datasource using the built-in database.yml. If you want to pass additional information to your environment, please see Passing Data to Applications