Check which ebextensions have been run - amazon-web-services

I have an environment in Amazon Elastic Beanstalk that is proving problematic. I'm trying to check which ebextensions have run because there is an oddity where I see a logrotate conf file created but the contents are not what I've written.
Does anyone know which log file that information is in? I've tried /var/log/eb-engine.log, but that doesn't seem to have anything about running ebextensions. eb-activity and eb-commandprocessor are mentioned in the docs, but those logs don't exist on the instance.
Platform: Ruby 2.6 running on 64bit Amazon Linux 2/3.0.3

According to AWS Support, some log files aren't available yet because the Amazon Linux 2 platform is still being worked on.
However, you can see which ebextensions have been run by looking in /var/log/cfn-init.log and /var/log/cfn-init-cmd.log.
I was searching for the file names rather than the command names, so I couldn't see where the results were being logged.
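For example, to see whether a particular ebextension ran, you can grep those logs on the instance; a minimal sketch (the search term is just the logrotate example from this question):
# Both logs live under /var/log on the Amazon Linux 2 platform
sudo grep -B 2 -A 5 "logrotate" /var/log/cfn-init.log /var/log/cfn-init-cmd.log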

Related

AWS Batch Failing to launch Dockerfile - standard_init_linux.go:219: exec user process caused: exec format error

I am attempting to use AWS Batch to launch a Linux server which will, in essence, perform the fetch and go example included within AWS (downloading an .sh file from S3 and running it).
Does AWS Batch work at all for anyone?
The AWS fetch_and_go example always fails, even when I followed someone else's guide online that mimicked the AWS example.
I have tried creating Dockerfiles for amazonlinux:latest and ubuntu:20.04 with numerous RUN and CMD variations.
The scripts always seem to fail with the error:
standard_init_linux.go:219: exec user process caused: exec format error
At first I thought this was related to access rights on my deployment, maybe within amazonlinux, so I have played with chmod 777, chmod -x, etc. on the .sh file.
The final nail in the coffin: my current Dockerfile is literally:
FROM ubuntu:20.04
I launch this using AWS Batch with no command or parameters passed through, and it still fails with the same error code. This almost hints to me that there is either a setup issue with my AWS Batch (I'm using the default wizard settings, except changing to an a1.medium server) or that AWS Batch has some major issues.
Has anyone had any success with AWS Batch launching their own Dockerfiles? Could they share their examples and/or setup parameters?
Thank you in advance.
A1 instances use ARM-based, first-generation Graviton CPUs. It is highly likely that the image you are trying to run expects an x86 CPU (Intel or AMD). Any instance class with a "g" in it ("c6g" or "m6g") is Graviton2, which is also ARM-based and will not work for the default examples.
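If you want to double-check what architecture a given image was built for, you can inspect it locally; a quick sketch (the image name is a placeholder):
# Prints e.g. linux/amd64 or linux/arm64 for the image
docker inspect --format '{{.Os}}/{{.Architecture}}' my-batch-image:latest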
You can test whether a specific container will run by launching an A1 instance yourself and running the container (after installing docker). My guess is that you will get the same error. Running on Intel or AMD instances should work.
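A minimal sketch of that test on an Amazon Linux 2 based A1 instance (the image name is again a placeholder):
# Install and start Docker on the ARM test instance
sudo amazon-linux-extras install -y docker
sudo service docker start
# If the image is x86-only, this should reproduce the exec format error
sudo docker run --rm my-batch-image:latest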
To leverage Batch with ARM your containerized application will need to work on ARM. If you point me to the exact example, I can give more details on how to adjust to run on A1 or Graviton2 instances.
I had the same issue, and it was because I built the image locally on my M1 Mac.
Try adding --platform linux/amd64 to your docker build command before pushing if this is your case.
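For example, assuming the image tag below is a placeholder for yours:
# Build an x86-64 image even on an ARM (M1) host, then push it
docker build --platform linux/amd64 -t my-registry/my-batch-image:latest .
docker push my-registry/my-batch-image:latest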
In addition to the other answer, you can create multi-arch images yourself, which will provide the correct architecture.
https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
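A rough sketch with docker buildx (the tag is a placeholder; the linked post has the full walkthrough):
# One-time builder setup
docker buildx create --use
# Build and push a single tag containing both amd64 and arm64 variants
docker buildx build --platform linux/amd64,linux/arm64 \
  -t my-registry/my-batch-image:latest --push .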

How could I deploy my Cloud Code to AWS Elastic Beanstalk? (Parse Server)

I am struggling with how to upload the Cloud Code files that I had on Parse.com to my Parse Server hosted on AWS EB.
So far I have:
Parse Server hosted on AWS EB. To host it on AWS I used the Orange Deploy Button, which basically makes everything easier because you don't have to install Parse Server locally and upload it to AWS later.
iOS app written in Objective-C, connected to the Parse Server and working perfectly
Parse Dashboard running locally on my Mac, connected to the Parse Server on AWS
The only thing I still need is to upload all my Cloud Code files to the Parse Server. How could I do this? I have researched a lot on Google, Stack Overflow, etc. without success. There is some information but it's unclear. Thanks in advance.
Finally, thanks to Ran Hassid, I now have a fully functional Parse Server on AWS with Cloud Code. For those who are in the same situation I was in, here is the answer to my question:
1. Go to this link here and follow all the steps. (At the time I asked the question, the information provided by that AWS link wasn't as clear as it is now; they have since improved the explanations.)
2. After you finish all the steps from the link, you will have a working Parse Server on AWS.
3. Now the Cloud Code part. Create a folder on your Mac or PC wherever you like, say on the desktop, and call it Parse Server AWS (you can call it whatever you want).
4. Install the EB CLI, the command-line interface you use from Terminal (on Mac) or the equivalent on Windows to work with the Parse Server you just set up on AWS (similar to Cloud Code with the Parse CLI). The easy way to install it is to run this command:
brew install awsebcli
5. Now open Terminal on your Mac (or the equivalent on Windows) and go to the folder you created in step 3.
6. Run the next command. It will ask you to select the location (region) of your Parse Server, and then its name.
eb init
7. Now this command. It will download all the files of your Parse Server from AWS into the folder you are in.
eb labs download
8. You will now have a folder called cloud where you can put all your Cloud Code files.
9. When you finish, just run this command:
eb deploy
10. Now you have your Parse Server with all your Cloud Code files working on AWS.
Whenever you need to change your Cloud Code files, just edit the local files inside the folder created in step 3 and run the command from step 9 again, exactly as you used to do with the parse deploy command.
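As a quick recap, the whole loop condenses to something like this (the folder name is only an example):
mkdir ParseServerAWS && cd ParseServerAWS   # the folder from step 3
eb init             # pick the region and application of your Parse Server
eb labs download    # pull the currently deployed app into this folder
# ...edit the files under cloud/ (main.js etc.)...
eb deploy           # push the changes back to Elastic Beanstalk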
Hopefully this information will help many people as it helped me.
Happy coding!
parse-server Cloud Code is a bit different from Parse.com Cloud Code. On Parse.com we used the Parse CLI to modify and deploy our Cloud Code (parse deploy ...). In parse-server, your Cloud Code lives under the path ./cloud/main.js of your Parse project, so your Cloud Code entry point is the main.js file, which by default is located under the cloud folder of your Parse project. If you really want, you can change this path, but to keep it simple use the default location.
Now about deployment: in parse-server you need to redeploy your Parse Server whenever you modify your Cloud Code. Another option is to edit your Cloud Code remotely, but from my point of view it's better to redeploy.

How to run pdftk on elastic beanstalk

I am trying to run pdftk on an Elastic Beanstalk instance. The first problem I ran into is that I cannot install pdftk on an instance of an Amazon Linux AMI because one of the dependencies (gcj) is not supported.
One of the options I am looking at is creating my own AMI and using that for my Elastic Beanstalk. Amazon recommends not doing this, and there are no community images for EB and Ubuntu.
Another option is using Docker. I am not as familiar with Docker, but I think I would be able to install pdftk in a container and then deploy that to EB. I am using Codeship for deployments and it looks like they have some options for Docker. (This is the option I'm currently exploring.)
The last option I can think of is writing a library for encrypting PDFs on my own. I had a look at the encryption specifications for PDFs and I think this is not a time-efficient option.
Has anyone had a similar problem and found a good solution?
UPDATE:
After some more research I discovered that the issue was not with Amazon Linux but with Fedora. Fedora dropped gcj because there was a lack of maintainers on the project, then dropped pdftk because it depends on gcj.
If you need another PDF toolkit, I have found podofo to be a good replacement for what I've needed.
First I apologise for resurrecting an old thread! Recently we wanted to create an Elastic Beanstalk worker environment that uses pdftk. Of course we also stumbled on the same issue, so this is what we did and it works for us so far. I hope it'll work for others too.
In the .ebextensions folder add the linked configs:
The needed LaTeX packages:
packages.config
You'll also need to add the el5 yum repository in order to install libgcj.
01_el5_yum.config
Next, add this config with the commands to install libgcj, pdftk and pdfjam:
02_pdftk.config
And that should be it.
In case anyone comes here having problems with pdftk: poppler-utils also covers some of the tasks done by pdftk (in my case it was PDF splitting) and can easily be set up on an EB instance through .ebextensions:
packages:
  yum:
    poppler-utils: []
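Once installed, the splitting itself can be done with the poppler-utils command-line tools, for example (file names are placeholders):
# Split input.pdf into one file per page (page-1.pdf, page-2.pdf, ...)
pdfseparate input.pdf page-%d.pdf
# Merge selected pages back together if needed
pdfunite page-1.pdf page-2.pdf merged.pdf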

WebHCat on Amazon's EMR?

Is it possible or advisable to run WebHCat on an Amazon Elastic MapReduce cluster?
I'm new to this technology and I was wondering if it was possible to use WebHCat as a REST interface to run Hive queries. The cluster in question is running Hive.
I wasn't able to get it working, but WebHCat is actually installed by default on Amazon's EMR instances.
To get it running you have to do the following:
chmod u+x /home/hadoop/hive/hcatalog/bin/hcat
chmod u+x /home/hadoop/hive/hcatalog/sbin/webhcat_server.sh
export TEMPLETON_HOME=/home/hadoop/.versions/hive-0.11.0/hcatalog/
export HCAT_PREFIX=/home/hadoop/.versions/hive-0.11.0/hcatalog/
/home/hadoop/hive/hcatalog/sbin/webhcat_server.sh start
You can then confirm that it's running on port 50111 using curl,
curl -i http://localhost:50111/templeton/v1/status
To hit 50111 on other machines you have to open the port up in the EC2 EMR security group.
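If you manage the security group with the AWS CLI, opening the port looks roughly like this (the group ID and CIDR are placeholders):
# Allow inbound TCP 50111 to the EMR master's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 50111 --cidr 203.0.113.0/24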
You then have to configure the users you are going to "proxy" when you run queries in HCatalog. I didn't actually save this configuration, but it is outlined in the WebHCat documentation. I wish they had some concrete examples there, but basically I ended up configuring the local 'hadoop' user as the one that runs the queries. Not the most secure thing to do, I am sure, but I was just trying to get it up and running.
Attempting a query then gave me this error:
{"error":"Server IPC version 9 cannot communicate with client version 4"}
The workaround was to switch off of the latest EMR image (3.0.4 with Hadoop 2.2.0) and switch to a Hadoop 1.0 image (2.4.2 with Hadoop 1.0.3).
I then hit another issue where it couldn't find the Hive jar properly. After struggling with the configuration some more, I decided I had dumped enough time into trying to get this to work and switched to communicating with Hive directly (using RBHive for Ruby and JDBC for the JVM).
To answer my own question: it is possible to run WebHCat on EMR, but it's not documented at all (Googling led me nowhere, which is why I created this question in the first place; it's currently the first hit when you search "WebHCat EMR"), and the WebHCat documentation leaves a lot to be desired. Getting it to work seems like a pain, though my hope is that by writing up the initial steps someone will come along, take it the rest of the way, and post a complete answer.
I did not test it, but it should be doable.
EMR allows you to customise bootstrap actions, i.e. the scripts run when the nodes are started. You can use bootstrap actions to install additional software and to change the configuration of applications on the cluster.
See more details at http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-plan-bootstrap.html.
I would create a shell script to install WebHCat and test it on a regular EC2 instance first (outside the context of EMR, just as a test to ensure your script is OK).
You can use EC2's user-data properties to test your script, typically:
#!/bin/bash
curl http://path_to_your_install_script.sh | sh
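For instance, saving the snippet above as userdata.sh, a throw-away test instance could be launched like this (the AMI ID and key name are placeholders):
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type m1.large \
  --key-name my-key --user-data file://userdata.sh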
Then, once you know the script is working, make it available to the cluster in an S3 bucket and follow these instructions to include your script as a custom bootstrap action of your cluster.
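A rough sketch of that with the AWS CLI, assuming the script was uploaded to a bucket of yours (bucket name, AMI version and instance settings are placeholders):
aws emr create-cluster --name "webhcat-test" --ami-version 2.4.2 \
  --applications Name=Hive \
  --instance-type m1.large --instance-count 3 \
  --bootstrap-actions Path=s3://my-bucket/install-webhcat.sh,Name=InstallWebHCat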
--Seb

Configuring AmazonLinux AMI instances

I am trying to set up an AMI such that, when booted, it will auto-configure itself with a defined "configuration" held somewhere on a server. I came across Chef and Puppet. Considering Puppet, I was able to run through their examples but couldn't see one for auto-configuration from the master. I found out that Puppet Enterprise is not supported on Amazon Linux. The team chose Amazon Linux and would like to keep it rather than switching to another OS just because one tool doesn't support it. Can someone please give me some idea of how I could achieve this? (I am trying to stay away from home-grown shell scripts in favour of a well-adopted industry tool, for maintainability.)
What I have done in the past is to copy /etc/rc.local to /etc/rc.local.orig, and then configure /etc/rc.local to kick off a puppet run and then pave over itself.
/etc/rc.local:
#!/bin/bash
##
#add pre-puppeting stuff here, I add the hostname in "User-data" when creating the VM so I can set the hostname before checking in
##
# run Puppet once against the master to apply this node's configuration
/usr/bin/puppet agent --test
# restore the original rc.local so this bootstrap only runs on the first boot
/bin/cp -f /etc/rc.local.orig /etc/rc.local
# reboot the instance (runlevel 6)
/sbin/init 6
AWS CloudFormation is one of Amazon's recommended ways to provision servers (and other cloud resources, too). You declare all the resources you need in a JSON file, and specify how to provision each server by declaring packages to install, services to run, files to create, and commands to run when the server is created. See the user guide for more information. I also wrote a couple of blog posts about getting started with it.
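A minimal sketch of launching such a template with the AWS CLI (the stack name and template path are placeholders; the template itself would hold the packages, services, files and commands declarations mentioned above):
# Create the stack from a local template file and watch its progress
aws cloudformation create-stack --stack-name my-app-stack \
  --template-body file://my-app-template.json
aws cloudformation describe-stack-events --stack-name my-app-stack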