Deploy image to AWS Elastic Beanstalk from private Docker repo

I'm trying to pull a Docker image from a private repo and deploy it to AWS Elastic Beanstalk using a Dockerrun.aws.json packed in a zip. Its content is:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my-bucket",
    "Key": "docker/.dockercfg"
  },
  "Image": {
    "Name": "namespace/repo:tag",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}
Here "my-bucket" is the name of my S3 bucket, which is in the same region as my EB environment. The configuration stored under that key is the result of
$ docker login
invoked in the docker2boot app's terminal; the resulting file is then copied to the "docker" folder in "my-bucket". The image definitely exists.
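For reference, the upload step described above amounts to something like this (bucket name and key taken from the question; a rough sketch, assuming the old-style ~/.dockercfg file exists on the machine where docker login was run):
$ docker login
$ aws s3 cp ~/.dockercfg s3://my-bucket/docker/.dockercfg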
After that I upload the .zip with the Dockerrun file to EB, and on deploy I get:
Activity execution failed, because: WARNING: Invalid auth configuration file
What am I missing?
Thanks in advance

Docker has updated the configuration file path from ~/.dockercfg to ~/.docker/config.json. They also took the opportunity to make a breaking change to the configuration file format.
AWS, however, still expects the former format, the one used in ~/.dockercfg (note the file name in their documentation):
{
  "https://index.docker.io/v1/": {
    "auth": "__auth__",
    "email": "__email__"
  }
}
This is incompatible with the new format used in ~/.docker/config.json:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "__auth__",
      "email": "__email__"
    }
  }
}
The two formats are quite similar, though. If your version of Docker generates the new format, just strip the "auths" line and its matching closing brace and you are good to go.
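If jq is available, one way to do that conversion is to extract the object under "auths" and save it as the old-style file (a sketch, assuming jq is installed):
$ jq '.auths' ~/.docker/config.json > ~/.dockercfg
The resulting ~/.dockercfg is in the old format shown above and can be uploaded to the S3 key referenced in Dockerrun.aws.json.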

Related

Hunspell Dictionary Config for AWS Elasticsearch

I am trying to install Hunspell stemming dictionaries for AWS ElasticSearch v7.10.
I have done this previously for a classic Unix install of ElasticSearch, which involved unzipping the latest .oxt dictionary file:
https://extensions.libreoffice.org/en/extensions/show/english-dictionaries
https://extensions.libreoffice.org/assets/downloads/41/1669872021/dict-en-20221201_lo.oxt
and copying these files to the expected filesystem path:
./config/hunspell/{lang}/{lang}.aff + {lang}.dic
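For context, the classic self-managed setup described above looked roughly like this (en_GB locale assumed; an .oxt file is just a zip archive, so the exact paths inside it may vary):
$ unzip dict-en-20221201_lo.oxt -d dict-en
$ mkdir -p ./config/hunspell/en_GB
$ cp dict-en/en_GB.aff dict-en/en_GB.dic ./config/hunspell/en_GB/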
The difference is that AWS ElasticSearch doesn't expose a backend filesystem. I have assumed we are supposed to use S3 instead, so I have created a bucket with this file layout, and I think I have successfully given it public read-only permissions.
s3://hunspell/
http://hunspell.s3-website.eu-west-2.amazonaws.com/
My ElasticSearch schema contains the following analyser
{
  "settings": {
    "analysis": {
      "analyzer": {
        //***** Stemmers *****//
        // DOCS: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-hunspell-tokenfilter.html
        "hunspell_stemmer_en_GB": {
          "type": "hunspell",
          "locale": "en_GB",
          "dedup": true,
          "ignore_case": true,
          "dictionary": [
            "s3://hunspell/en_GB/en_GB.aff",
            "s3://hunspell/en_GB/en_GB.dic"
          ]
        }
      }
    }
  }
}
But the mapping PUT command still returns the following exception:
"type": "illegal_state_exception",
"reason": "failed to load hunspell dictionary for locale: en_GB",
"caused_by": {
  "type": "exception",
  "reason": "Could not find hunspell dictionary [en_GB]"
}
How do I configure Hunspell for AWS ElasticSearch?

How do you properly format the syntax in an AWS Systems Manager document using downloadContent sourceInfo StringMap

My goal is to have an AWS Systems Manager document download a script from S3 and then run that script on the selected EC2 instance. In this case, it will be a Linux OS.
According to the AWS documentation for aws:downloadContent, the sourceInfo input is of type StringMap.
The example code looks like this:
{
  "schemaVersion": "2.2",
  "description": "aws:downloadContent",
  "parameters": {
    "sourceType": {
      "description": "(Required) The download source.",
      "type": "String"
    },
    "sourceInfo": {
      "description": "(Required) The information required to retrieve the content from the required source.",
      "type": "StringMap"
    }
  },
  "mainSteps": [
    {
      "action": "aws:downloadContent",
      "name": "downloadContent",
      "inputs": {
        "sourceType": "{{ sourceType }}",
        "sourceInfo": "{{ sourceInfo }}"
      }
    }
  ]
}
This code assumes you will run the document by hand (console or CLI) and then enter the sourceInfo as a parameter. When running the document by hand, anything entered in that parameter (an S3 URL) isn't accepted. However, I'm not trying to run this by hand but programmatically, and I want to hard-code the S3 URL into sourceInfo in mainSteps.
AWS does give an example of syntax that looks like this:
{
  "path": "https://s3.amazonaws.com/aws-executecommand-test/powershell/helloPowershell.ps1"
}
I've coded the document action in mainSteps like this:
{
  "action": "aws:downloadContent",
  "name": "downloadContent",
  "inputs": {
    "sourceType": "S3",
    "sourceInfo": {
      "path": "https://s3.amazonaws.com/bucketname/folder1/folder2/script.sh"
    },
    "destinationPath": "/tmp"
  }
},
However, it doesn't seem to work and I receive this error:
invalid format in plugin properties map[sourceInfo:map[path:https://s3.amazonaws.com/bucketname/folder1/folder2/script.sh] sourceType:S3];
error json: cannot unmarshal object into Go struct field DownloadContentPlugin.sourceInfo of type string
Note: I have seen a post that describes how to format this for Windows. I did try it, but it didn't work and doesn't seem relevant to my Linux needs.
So my questions are:
Do you need a parameter for sourceInfo of type StringMap - something that won't be used within the aws:downloadContent {{ sourceInfo }} mainSteps?
How do you properly format the aws:downloadContent action sourceInfo StringMap in mainSteps?
Thank you for your effort in advance.
I had a similar issue, as I did not want anyone to have to type anything when running the document. So I added a default to the downloadContent document's sourceInfo parameter:
"sourceInfo": {
  "description": "(Required) Blah.",
  "type": "StringMap",
  "displayType": "textarea",
  "default": {
    "path": "https://mybucket-public.s3-us-west-2.amazonaws.com/automation.sh"
  }
}
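Putting that together with the question's document, a complete document with the URL hard-coded as a default might look roughly like this (the bucket path is a placeholder; untested sketch):
{
  "schemaVersion": "2.2",
  "description": "aws:downloadContent with a default S3 path",
  "parameters": {
    "sourceType": {
      "description": "(Required) The download source.",
      "type": "String",
      "default": "S3"
    },
    "sourceInfo": {
      "description": "(Required) The information required to retrieve the content.",
      "type": "StringMap",
      "displayType": "textarea",
      "default": {
        "path": "https://mybucket-public.s3-us-west-2.amazonaws.com/automation.sh"
      }
    }
  },
  "mainSteps": [
    {
      "action": "aws:downloadContent",
      "name": "downloadContent",
      "inputs": {
        "sourceType": "{{ sourceType }}",
        "sourceInfo": "{{ sourceInfo }}",
        "destinationPath": "/tmp"
      }
    }
  ]
}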

Why can't I change "spark.driver.memory" value in AWS Elastic Map Reduce?

I want to tune my Spark cluster on AWS EMR, but I can't change the default value of spark.driver.memory, which makes every Spark application crash because my dataset is big.
I tried editing the spark-defaults.conf file manually on the master machine, and I also tried configuring it directly with a JSON file on the EMR dashboard while creating the cluster.
Here's the JSON file used:
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.memory": "7g",
      "spark.driver.cores": "5",
      "spark.executor.memory": "7g",
      "spark.executor.cores": "5",
      "spark.executor.instances": "11"
    }
  }
]
After using the JSON file, the configuration shows up correctly in spark-defaults.conf, but the Spark dashboard still shows the default value of 1000M for spark.driver.memory, while the other values are applied correctly. Has anyone run into the same problem?
Thank you in advance.
You need to set
maximizeResourceAllocation=true
in the spark classification settings:
[
  {
    "Classification": "spark",
    "Properties": {
      "maximizeResourceAllocation": "true"
    }
  }
]
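If you are supplying this at cluster creation time from the CLI, it can be passed alongside the spark-defaults classification in the same configurations file, for example (name, release label, instance types and counts are placeholders; just a sketch):
$ aws emr create-cluster \
    --name "tuned-cluster" \
    --release-label emr-5.30.0 \
    --applications Name=Spark \
    --instance-type m5.xlarge \
    --instance-count 3 \
    --use-default-roles \
    --configurations file://configurations.json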

aws ec2 import-image error "ClientError: GRUB doesn't exist in /etc/default"

I am following the instructions from http://docs.aws.amazon.com/vm-import/latest/userguide/import-vm-image.html to import an OVA. Here are the summarized steps I followed.
Step 1: Upload an OVA to S3 bucket.
Step 2: Create trust policy
Step 3: Create role policy
Step 4: Create containers.json with bucket name and ova filename.
Step 5: Run command for import-image
Command: aws ec2 import-image --description "My Unique OVA" --disk-containers file://containers.json
Step 6: Get the "ImportTaskId": "import-ami-fgi2cyyd" (in my case)
Step 7: Check status of import task
Error:
C:\Users\joe>aws ec2 describe-import-image-tasks --import-task-ids import-ami-fgi2cyyd
{
  "ImportImageTasks": [
    {
      "Status": "deleted",
      "SnapshotDetails": [
        {
          "UserBucket": {
            "S3Bucket": "my_unique_bucket",
            "S3Key": "my_unique_ova.ova"
          },
          "DiskImageSize": 2871726592.0,
          "Format": "VMDK"
        }
      ],
      "Description": "My Unique OVA",
      "StatusMessage": "ClientError: GRUB doesn't exist in /etc/default directory.",
      "ImportTaskId": "import-ami-fgi2cyyd"
    }
  ]
}
What am I doing wrong? I am on free-tier trying things out.
Contents of containers.json:
[
  {
    "Description": "My Unique OVA",
    "Format": "ova",
    "UserBucket": {
      "S3Bucket": "my_unique_bucket",
      "S3Key": "my_unique_ova.ova"
    }
  }
]
In my case, the OVA file was corrupted. I tried it with a smaller OVA and it worked fine.

Alright, figured it out. The problem I ran into, which I assume is the case with yours as well, is that the VM probably isn't using the GRUB boot loader but rather LILO. I was able to change the boot loader by going into the GUI (startx) and opening the system configuration. Under the Boot menu I was able to switch from LILO to GRUB. Once I did that, I got further in the EC2 VM import process. Hope that helps.
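A quick way to check for this on the source VM before exporting it again (path taken from the error message; just a sketch):
$ ls /etc/default/grub
# if this file is missing, the VM is likely booting with another loader such as LILO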

AWS OpsWorks RDS Configuration for Tomcat context.xml

I am trying to deploy an app named abcd whose artifact is abcd.war. I want to configure it to use external datasources. Below is my abcd.war/META-INF/context.xml file:
<Context>
  <ResourceLink global="jdbc/abcdDataSource1" name="jdbc/abcdDataSource1" type="javax.sql.DataSource"/>
  <ResourceLink global="jdbc/abcdDataSource2" name="jdbc/abcdDataSource2" type="javax.sql.DataSource"/>
</Context>
I configured the custom JSON below during a deployment:
{
  "datasources": {
    "fa": "jdbc/abcdDataSource1",
    "fa": "jdbc/abcdDataSource2"
  },
  "deploy": {
    "fa": {
      "database": {
        "username": "un",
        "password": "pass",
        "database": "ds1",
        "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
        "adapter": "mysql"
      },
      "database": {
        "username": "un",
        "password": "pass",
        "database": "ds2",
        "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
        "adapter": "mysql"
      }
    }
  }
}
I also added the recipe opsworks_java::context to the configure phase, but it doesn't seem to be working and I always get the messages below:
[2014-01-11T16:12:48+00:00] INFO: Processing template[context file for abcd] action create (opsworks_java::context line 16)
[2014-01-11T16:12:48+00:00] DEBUG: Skipping template[context file for abcd] due to only_if ruby block
Can anyone please help with what I am missing in the OpsWorks configuration?
You can only configure one datasource using the built-in database.yml. If you want to pass additional information to your environment, please see the Passing Data to Applications documentation.
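So with the built-in database.yml, the custom JSON is limited to a single datasource per app, along these lines (names copied from the question; untested sketch):
{
  "datasources": {
    "fa": "jdbc/abcdDataSource1"
  },
  "deploy": {
    "fa": {
      "database": {
        "username": "un",
        "password": "pass",
        "database": "ds1",
        "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
        "adapter": "mysql"
      }
    }
  }
}
The second datasource would then have to be passed to the app by another means, as described in that documentation.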