How to configure Marvel on Found - elasticsearch-marvel

I want to use Marvel on a Found cluster. I have referred to these links https://www.elastic.co/guide/en/marvel/current/configuration.html , https://www.elastic.co/guide/en/shield/current/marvel.html to set it up locally, but how do I configure it on Elastic Found? The Marvel agent has to be installed on every node of the cluster for it to be functional, but how do I do that?
Below are the plugins I can select in the Found configuration.
There is no option to select Marvel among the plugins. The Marvel dashboard and application can be installed in Kibana, but they won't work without the Marvel agent.
Is it preinstalled by default on Found? If so, how do I configure it?
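For reference, on a self-managed 2.x cluster the Marvel agent is configured in elasticsearch.yml with an exporter block roughly like the one below (the hostname is a placeholder); on a hosted Found cluster you would need the provider to expose an equivalent setting, since you can't edit the node config directly:

# elasticsearch.yml - ship Marvel data to a separate monitoring cluster
marvel.agent.exporters:
  id1:
    type: http
    host: ["http://monitoring-host:9200"]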

Related

Jenkins available plugin list empty

I'm using Jenkins running in a Docker container inside VirtualBox, with a bridged network adapter. I want to install the Jenkins plugin Blue Ocean for pipeline management. The problem is that the available plugin list is empty in the Manage Plugins section of Jenkins. I searched for solutions in Unable to find plugins in list of available plugins in jenkins, but none of the proposed solutions work. I am wondering if there is any other way to fix it. I guess I need to configure a proxy, but I don't know how to do that with Jenkins running in a container, and I don't understand the nature of the proxy well.
Thanks.
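One thing worth trying, if a proxy really is the cause: Jenkins fetches the update center through the JVM, so you can pass standard JVM proxy properties when starting the container. The proxy host and port below are placeholders, not values from the question:

```shell
# Start Jenkins with JVM proxy settings so it can reach the update center.
# proxy.example.com:3128 is a placeholder for your actual proxy.
docker run -d -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -e JAVA_OPTS="-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=3128" \
  jenkins/jenkins:lts
```

After restarting the container this way, check Manage Jenkins → Manage Plugins → Advanced and hit "Check now" to refresh the update-center data.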

How to create a Windows VM in GCP such that we can use it in Jenkins for automated tests

I am looking for help with GCP: I want to create a Windows VM that has Java and some browsers, say Chrome. Once this is done, I want to integrate the VM with Jenkins so that whenever an automated build runs in Jenkins, it runs the automated tests (say, Selenium) on the VM, creates the reports, and so on. Is this possible via GCP? Please let me know and guide me on this, and please share any tutorial or sample.
Thanks a lot.
I don't think any of the images provided by GCP have that software installed; you need to install it manually, or you can use a startup script to automate some of these tasks.
Here is some quick information to get you started:
Create a Windows instance
Install Java or a JDK
Install Chrome
Install Jenkins
Automate the task with Jenkins and Windows
As an alternative, you can deploy from the Marketplace: find the Jenkins offering that is installed on a Windows VM and then install the other components (Chrome and Java).
Consider that some Marketplace solutions have an additional cost.
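The first step above can be sketched with gcloud. The instance name, zone, machine type and the startup script are placeholders; the PowerShell script would contain your Chrome/Java/Jenkins install commands:

```shell
# Create a Windows VM and run a PowerShell startup script on first boot.
# test-vm and startup.ps1 are placeholder names.
gcloud compute instances create test-vm \
  --zone=us-central1-a \
  --machine-type=n1-standard-2 \
  --image-project=windows-cloud \
  --image-family=windows-2019 \
  --metadata-from-file=windows-startup-script-ps1=startup.ps1
```

Once the VM is up, you can attach it to Jenkins as an agent (e.g. over JNLP or SSH) so builds are dispatched to it.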

Location of Sqoop installation on Amazon EMR cluster?

I started an EMR cluster in order to test out Sqoop, but it turns out it doesn't seem to be installed on the latest version of EMR (5.19.0), as I didn't find it in the directory /usr/lib/sqoop. I tried 5.18.0 as well, but it was missing there too.
According to the application versions page, Sqoop 1.4.7 should be installed on the cluster.
The EMR console gives me a list of 4 "installations". I chose the Core Hadoop package. It has Hive, Hue, etc. installed in /usr/lib. Am I missing something here? It's my first time using EMR or Sqoop.
I did not see the "Advanced Options" link at the top of the "Create Cluster" page where I could select individual software to install.
When creating an EMR cluster, use the Advanced Options link, which allows you to select Sqoop.
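Equivalently, from the AWS CLI you can request Sqoop explicitly in the application list when creating the cluster (the cluster name, key pair and instance settings below are placeholders):

```shell
# Create an EMR 5.19.0 cluster with Sqoop included alongside the
# Core Hadoop applications. my-key is a placeholder key pair name.
aws emr create-cluster \
  --name "sqoop-test" \
  --release-label emr-5.19.0 \
  --applications Name=Hadoop Name=Hive Name=Hue Name=Sqoop \
  --instance-type m4.large \
  --instance-count 3 \
  --ec2-attributes KeyName=my-key \
  --use-default-roles
```

With Sqoop in the application list, it should then appear under /usr/lib/sqoop on the master node.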

How to run pdftk on Elastic Beanstalk

I am trying to run pdftk on Elastic Beanstalk. The first problem I ran into is that I cannot install pdftk on an instance of an Amazon Linux AMI because one of the dependencies (gcj) is not supported.
One of the options I am looking at is creating my own AMI and using that for my Elastic Beanstalk. Amazon recommends not doing this, and there are no community images for EB and Ubuntu.
Another option is using Docker. I am not as familiar with Docker, but I think I would be able to install pdftk in a container and then deploy that to EB. I am using Codeship for deployments and it looks like they have some options for Docker. (This is the option I'm currently exploring.)
The last option I can think of is writing a library for encrypting PDFs on my own. I had a look at the encryption specifications for PDFs and I think this is not a time-efficient option.
Has anyone had a similar problem and found a good solution to it?
UPDATE:
After some more research I discovered that the issue was not an Amazon Linux bug but a Fedora one. Fedora dropped gcj because of a lack of maintainers on the project, and then dropped pdftk because it depends on gcj.
If you need another pdf tool kit I have found podofo to be a good replacement for what I've needed.
First, I apologise for resurrecting an old thread! Recently we wanted to create an Elastic Beanstalk worker environment that uses pdftk. Of course we also stumbled on the same issue, so this is what we did, and it has worked for us so far. I hope it'll work for others too.
In the .ebextensions folder, add the linked configs:
The needed LaTeX packages:
packages.config
You'll also need to add the el5 yum repository in order to install libgcj.
01_el5_yum.config
Next, add this config with the commands to install libgcj, pdftk and pdfjam:
02_pdftk.config
And that should be it.
In case anyone comes here having problems with pdftk: poppler-utils also covers some of the tasks done by pdftk (in my case it was PDF splitting) and can easily be set up on an EB instance through .ebextensions:
packages:
  yum:
    poppler-utils: []
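As a quick illustration of the splitting use case, once poppler-utils is installed the pdfseparate tool does the job (the filenames below are placeholders):

```shell
# Split input.pdf into one file per page: page-1.pdf, page-2.pdf, ...
pdfseparate input.pdf page-%d.pdf
```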

Configuring AmazonLinux AMI instances

I am trying to set up an AMI such that, when booted, it auto-configures itself from a defined "configuration" held somewhere on a server. I came across Chef and Puppet. Considering Puppet, I was able to run through their examples, but couldn't find one for auto-configuration from a master. I found out that Puppet Enterprise is not supported on Amazon Linux. The team chose Amazon Linux and would like to keep it rather than moving to another OS just because one tool doesn't support it. Can someone please give me an idea of how I could achieve this? (I am trying to stay away from home-grown shell scripts in favour of a well-adopted industry tool, for maintainability.)
What I have done in the past is to copy /etc/rc.local to /etc/rc.local.orig, and then configure /etc/rc.local to kick off a puppet run and then pave over itself.
/etc/rc.local:
#!/bin/bash
##
# Add pre-Puppet setup here. I pass the hostname in "User data" when creating
# the VM so I can set it before the node checks in with the master.
##
# Run the Puppet agent once in the foreground to apply the catalog
/usr/bin/puppet agent --test
# Restore the original rc.local so this bootstrap only runs on first boot
/bin/cp -f /etc/rc.local.orig /etc/rc.local
# Reboot
/sbin/init 6
AWS CloudFormation is one of Amazon's recommended ways to provision servers (and other cloud resources, too). You declare all the resources you need in a JSON file, and specify how to provision each server by declaring packages to install, services to run, files to create, and commands to run when the server is created. See the user guide for more information. I also wrote a couple of blog posts about getting started with it.
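As a rough sketch of that declarative style (the resource name, AMI ID and package below are placeholders), the Metadata section of an instance resource declares what cfn-init should install and run:

{
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Metadata": {
        "AWS::CloudFormation::Init": {
          "config": {
            "packages": { "yum": { "httpd": [] } },
            "services": { "sysvinit": { "httpd": { "enabled": "true", "ensureRunning": "true" } } }
          }
        }
      },
      "Properties": {
        "ImageId": "ami-12345678",
        "InstanceType": "t2.micro"
      }
    }
  }
}

This works on Amazon Linux out of the box, since the cfn-init helper scripts ship with the Amazon Linux AMI.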