AWS - trying to install the kopf plugin for Elasticsearch on an EC2 instance

I have an Ubuntu EC2 instance in AWS.
I already installed Java and Elasticsearch, and now I'm trying to install kopf so I can manage my nodes using the web UI.
However, when I run the following from the /usr/share/elasticsearch directory:
bin/elasticsearch-plugin install lmenezes/elasticsearch-kopf
I get the error:
ERROR: Unknown plugin lmenezes/elasticsearch-kopf
What am I doing wrong?
(My Elasticsearch version is 6.2.2.)
Thanks!

https://github.com/lmenezes/elasticsearch-kopf
Kopf is no longer maintained. A replacement (cerebro) has been developed and is currently maintained at https://github.com/lmenezes/cerebro. At this point, cerebro should be pretty much feature equivalent of kopf, with a few new features on top.
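Since Elasticsearch 5.x removed site plugins and the plugin tool no longer accepts GitHub-style names, kopf cannot be installed this way on 6.2.2 at all; cerebro runs as a standalone web app instead. A rough sketch of getting it going on the instance (the version number and download URL are examples only; check the cerebro releases page for the current one):
# cerebro needs Java, which is already installed on this instance
wget https://github.com/lmenezes/cerebro/releases/download/v0.8.5/cerebro-0.8.5.tgz
tar xzf cerebro-0.8.5.tgz
cd cerebro-0.8.5
bin/cerebro   # serves a web UI on port 9000 by default; point it at http://localhost:9200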

Related

Node.js native addons on Linux

I'm using AWS Lambda, which involves creating an archive of my Node.js script, including the node_modules folder, and uploading that to their infrastructure to run.
This works fine, except when it comes to node modules with native bindings (using node-gyp). Because the binding was compiled and the project archived on my local computer (OS X), it is not compatible with AWS's (Amazon Linux) servers.
How can I cross-compile/install a node module (specifically node-sqlite3) so that it runs when I upload it to a server with a different architecture?
While not really a solution to your problem, a very easy workaround could be to simply compile the native addons on a Linux machine.
For your particular situation, I would use Vagrant. Vagrant can create virtual machines and configure them within seconds.
Find an OS image that resembles Amazon's Linux distro (Fedora, CentOS, others that use yum as package manager - see Wiki)
Use a simple configuration script that, when run by Vagrant on machine startup, will run npm install (optionally it might also remove the node_modules folder before to ensure a clean installation)
For extra comfort, the script can also create the zip file for deployment
Once the installation finishes, the script will shut down the VM to avoid unnecessary consumption of system resources
Deploy!
It might require some tuning if the linked libraries are not in the same place on the target machine, but generally this seems to me like the best and quickest solution.
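A minimal sketch of what such a provisioning script might run inside the VM (the zip name and the -x pattern are illustrative, not from the original answer; /vagrant is Vagrant's default synced folder):
#!/bin/bash
cd /vagrant                              # project folder shared with the host
rm -rf node_modules                      # ensure a clean installation
npm install --production                 # compiles native addons against Linux
zip -r lambda-deploy.zip . -x "*.git*"   # package for Lambda deployment
sudo shutdown -h now                     # free host resources once finished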
While installing the app using Vagrant might be sufficient in some cases, I have found it necessary to build the app on Linux which is as close to Lambda's Amazon Linux AMI as possible.
You can read the original answer here: https://stackoverflow.com/a/34019739/303184
Steps to make it work:
Spawn a new EC2 instance. Make sure it is based on exactly the same image as your AWS Lambda runtime. You can review the Lambda environment details here: http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html. In our case, it was the Amazon Linux AMI called amzn-ami-hvm-2015.03.0.x86_64-gp2.
Install nvm and use it to install the same version of Node.js as on AWS Lambda. At the time of writing, it was v0.10.36. You can refer to http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html again to find out.
You will probably need to install git and the g++ compiler on the EC2 instance. You can do this by running
sudo yum install git gcc-c++
Finally, clone your app to your new EC2 instance and install your app's dependencies:
nvm use 0.10.36
npm install --production
You can then easily download the node_modules using scp or such.
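For that last step, a sketch of what the copy might look like (the key file, hostname, and paths are placeholders, not from the original answer):
scp -r -i my-key.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:~/my-app/node_modules ./node_modules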
Along the same lines as Robert's answer: when I had to work in a different OS from my Mac, I used VM software like Oracle's free virtualizer VirtualBox to get Linux running on my Mac at no cost to me. Or sign up for a new AWS account; you get a micro instance free for a year. Use that to get your Linux box and do whatever you need there.
AWS has a page describing how to deal with native NPM modules: https://aws.amazon.com/blogs/compute/nodejs-packages-in-lambda/

AWS Elastic Beanstalk Python 3.7 Deployment Location

I'm trying to upgrade an existing application on AWS from the now deprecated Python 3.4 platform to 3.7 with Amazon Linux 2/3.0.1, and in the process I ran into an issue with where the application source code is deployed on the EC2 instance.
From some empirical testing, I found that instead of the /opt/python/current/app directory that most, if not all, AWS documentation mentions (e.g. Troubleshooting issues with the EB CLI - AWS Elastic Beanstalk), with Python 3.7 the application is actually deployed in /var/app/current/. I wasn't able to find any documentation regarding this change, and it is causing some issues with the application. I'm wondering: is there any reason this change was made? And if it is possible to revert it, how do I do so?
Thanks in advance!
This is because the Python 3.7 Elastic Beanstalk platform uses Amazon Linux 2, which is fundamentally different from its AMI predecessor. If you opt to use Python 3.6 instead, you should be able to avoid this issue, as it runs on the earlier Amazon Linux version, where deployments still land in /opt/python/current/app. Most tutorials I've found are designed to work with this older layout, including the most up-to-date Amazon getting-started guide.
If you have the time, try migrating your code to the newer version, as this seems to be the workflow Amazon is embracing going forward, for all newer versions of Python (such as 3.8 and others yet to come).
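If other scripts or tooling still expect the old path in the meantime, one possible stopgap (my own workaround suggestion, not something from the original answer) is a postdeploy platform hook that symlinks the legacy location to the new one; Amazon Linux 2 platforms run executable scripts placed under .platform/hooks/postdeploy/ in the source bundle:
#!/bin/bash
# .platform/hooks/postdeploy/01_link_old_app_path.sh (file name is arbitrary)
# Make the legacy Amazon Linux 1 path point at the Amazon Linux 2 app directory.
mkdir -p /opt/python/current
ln -sfn /var/app/current /opt/python/current/app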

Got stuck while installing the pool package on an AWS Ubuntu server

I am trying to install the pool package for my Shiny app on an Ubuntu AMI in AWS.
I used the command
R -e "install.packages('pool', repos='http://cran.rstudio.com/')"
but it got stuck.
Is there any solution apart from upgrading the t2.micro instance (which is under the free tier) to a larger type, as I don't want to get a bill from AWS?

Incompatible Windows docker image in AWS ECS

I have created a standard Windows cluster in AWS Elastic Container Services (ECS) and am trying to deploy an ASP.NET Docker image (microsoft/aspnet:4.7.1-windowsservercore-1709) to it, and I get the following error:
Status reason CannotPullContainerError: a Windows version
10.0.16299-based image is incompatible with a 10.0.14393 host
My application is an ASP.NET Web API application using .NET Framework 4.6.1.
My Dockerfile is:
FROM microsoft/aspnet:4.7.1-windowsservercore-1709
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
Can anyone suggest what image I could deploy?
Thanks
Change your FROM to aspnet:4.7.1-windowsservercore-ltsc2016 and it should resolve your issue. Keep in mind the image size for this tag is considerably larger than 1709.
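For reference, a sketch of the question's Dockerfile with only the base image swapped as suggested (everything else assumed unchanged):
FROM microsoft/aspnet:4.7.1-windowsservercore-ltsc2016
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .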
We also got the following message when using AWS ECS:
CannotPullContainerError: a Windows version 10.0.16299-based image is incompatible with a 10.0.14393 host
After a lot of trial and error, we found that we were using the .NET Core SDK 2.2 and AWS ECS wanted 2.1. The developer made changes in Visual Studio 2017 and in the Dockerfile to reference 2.1 instead of 2.2. Once that was done, ECS was able to consume it and we had a running state.
Unfortunately, the error was not very descriptive, and we went down a rabbit hole before discovering what our problem really was.
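The exact edits depend on the project, but as a sketch of the kind of change involved (image tags and the MyApi.dll name are illustrative, not taken from the original post; the project's TargetFramework also has to move to netcoreapp2.1), the Dockerfile base images drop from the 2.2 tags to 2.1:
# build stage (was microsoft/dotnet:2.2-sdk)
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app
# runtime stage (was microsoft/dotnet:2.2-aspnetcore-runtime)
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApi.dll"]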

Update Parse-Server-Example

I thought this would be an easy topic to find on the web, but I couldn't find a solution.
I deployed the parse-server-example on AWS Elastic Beanstalk according to the original documentation and it works perfectly. Can anyone give me a hint how to update this server to the newest version? When I try to use parse-dashboard, I get the error "server version too low".
I have already cloned the parse server with the EB CLI, but I do not know how, or which files, to update.
Thanks for any hint!
In package.json, you update the version next to 'parse-server'. I think by default this is '~2.0'?
Parse Dashboard requires parse-server to be '>=2.1.4'. However, I'm currently having issues when changing the parse-server version; it breaks my AWS server instance. I have an open issue on GitHub (https://github.com/ParsePlatform/parse-server-example/issues/109#issuecomment-198001722), so keep an eye on that.
But yeah, that's where you update your parse-server version, I believe!
Once you've done this locally on your machine, you obviously need to deploy the updates to AWS via the Beanstalk Dashboard, as this will install/update any node modules from package.json.
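As a minimal sketch, the relevant dependency entry in package.json would end up looking something like this (2.1.4 is shown only because it is the minimum the dashboard asks for; pick whatever current version works for you):
"dependencies": {
  "parse-server": "~2.1.4"
}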