How to set up a Splunk stand-alone environment - admin

I want to create a stand-alone deployment where the heavy forwarder, indexer, and search head all run on a single server.
A stand-alone instance is mostly used for staging environments; I want to use one for testing a few apps.
How do I set up a stand-alone Splunk instance?

Splunk has two major products:
Splunk Enterprise
Splunk Universal Forwarder
Splunk Enterprise - Splunk Enterprise includes the Heavy Forwarder, Indexer, Search Head, Deployer, Deployment Server, Cluster Master, and License Master as its components.
Splunk Universal Forwarder - This package contains only the Universal Forwarder.
As stated in the question, to run these three Splunk components (heavy forwarder, indexer, and search head) on a single server, you need to install Splunk Enterprise.
You can install Splunk Enterprise from the link below:
Splunk Enterprise Installation link

A stand-alone Splunk system is a combined search head and indexer - no forwarders. Installing a forwarder on a full Splunk Enterprise instance is redundant.
That said, the installation is very simple: download the installer and run it.
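For a Linux host, a minimal sketch of the tarball install looks like this (the download URL, version, and app file name are placeholders; take the real URL from the Splunk Enterprise installation page):

wget -O splunk.tgz "https://download.splunk.com/products/splunk/releases/<version>/linux/splunk-<version>-Linux-x86_64.tgz"   # placeholder URL
tar -xzf splunk.tgz -C /opt                        # unpacks to /opt/splunk
/opt/splunk/bin/splunk start --accept-license      # first start; prompts for admin credentials
/opt/splunk/bin/splunk enable boot-start           # optional: start Splunk at boot
/opt/splunk/bin/splunk install app <your-app>.tgz  # install an app to test (placeholder file)

Because this single instance parses, indexes, and searches on its own, no separate forwarder setup is needed.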

Related

Does AWS ECS distributed load testing support the JMeter MQTT sampler?

I am performing JMeter distributed load testing and have been trying to set up AWS ECS containers for load testing of Mosquitto MQTT.
Does AWS Distributed Load Testing support the JMeter MQTT sampler plugin
mqtt-jmeter?
If your test is using any JMeter Plugins, you will need to install the plugin on:
master machine
and all the slave machines
So amend your container build scripts to include the plugin and all its dependencies, and do this for both master and slave; the JMeter Plugins Manager can be executed as a command-line tool (see the sketch below).
The same applies to any test data, such as CSV files used in the CSV Data Set Config, and to JMeter properties.
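A minimal sketch of the container-build steps, assuming a Linux image with JMETER_HOME set (versions, the plugin id, and file names are placeholders):

# Fetch the Plugins Manager and the cmdrunner jar it needs for command-line use
wget -O $JMETER_HOME/lib/ext/jmeter-plugins-manager.jar https://jmeter-plugins.org/get/
wget -O $JMETER_HOME/lib/cmdrunner-2.2.jar https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2/cmdrunner-2.2.jar
# Generate the PluginsManagerCMD.sh wrapper in $JMETER_HOME/bin
java -cp $JMETER_HOME/lib/ext/jmeter-plugins-manager.jar org.jmeterplugins.repository.PluginManagerCMDInstaller
# Install the plugin by its id (placeholder); plugins not in the repository
# can instead be copied straight into $JMETER_HOME/lib/ext
$JMETER_HOME/bin/PluginsManagerCMD.sh install <plugin-id>
# Ship the test data used by the plan, e.g. CSV Data Set Config files
cp testdata/*.csv $JMETER_HOME/bin/

Run the same steps in both the master and the slave image builds so every node has identical plugins and test data.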
You may find the JMeter Distributed Testing with Docker article interesting; it may give you some more ideas on preparing your containers.
The answer is no. AWS Distributed Load Testing works only with HTTP.

Does Google Dataproc support Apache Impala?

I am new to using cloud services and navigating Google's Cloud Platform is quite intimidating. When it comes to Google Dataproc, they do advertise Hadoop, Spark and Hive.
My question is, is Impala available at all?
I would like to do some benchmarking projects using all four of these tools, and I require Apache Impala alongside Spark/Hive.
No. Dataproc is a cluster service that supports Hadoop, Spark, Hive and Pig using its default images.
Check this link for more information about the native image list for Dataproc:
https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions
You can also try creating a Dataproc cluster from a non-default image.
For example, you can create a Dataproc instance with HUE (Hadoop User Experience), an interface for working with a Hadoop cluster, built by Cloudera. The advantage here is that HUE has Apache Impala as a default component; it also has Pig, Hive, etc. So it's a pretty good solution for using Impala.
Another option would be to build your own cluster from scratch, but that is not a good idea unless you want to customize everything. That way, you can install Impala yourself.
Here is a link, for more information:
https://github.com/GoogleCloudPlatform/dataproc-initialization-actions/tree/master/hue
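A minimal sketch of creating such a cluster with the HUE initialization action (the cluster name is a placeholder; check the repository linked above for the script's current location):

gcloud dataproc clusters create my-hue-cluster \
    --initialization-actions gs://dataproc-initialization-actions/hue/hue.sh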
Dataproc gives you SSH access to the master and workers, so it is possible to install additional software (see the sketch after this list). According to the Impala documentation you would need to:
Ensure the Impala requirements are met.
Set up Impala on the cluster by building it from source.
Remember that it is recommended to install the impalad daemon on each DataNode.
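A minimal sketch of reaching the nodes over SSH (cluster name and zone are placeholders; Dataproc names the master <cluster>-m and the workers <cluster>-w-0, -w-1, ...):

gcloud compute ssh my-cluster-m --zone us-central1-a     # master node
gcloud compute ssh my-cluster-w-0 --zone us-central1-a   # first worker node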
Cloud Dataproc supports Hadoop, Spark, Hive and Pig by default on the cluster. You can also install optionally supported components such as ZooKeeper, Jupyter, Anaconda, Kerberos, Druid and Presto (you can find the complete list here). In addition, you can install a large set of open-source components using initialization actions.
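A minimal sketch of enabling optional components at cluster creation time (component names and the cluster name are placeholders; at the time of writing this requires the beta command group):

gcloud beta dataproc clusters create my-cluster \
    --optional-components=ZOOKEEPER,PRESTO \
    --image-version=1.3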
Impala is not supported as an optional component, and there is no initialization-action script for it yet. You could get it to work on Dataproc with HDFS, but making it work with GCS may require non-trivial changes.

Sitecore 9 Commerce set up in a distributed environment

I am trying to install Sitecore Commerce on Sitecore 9 Update 1 in a distributed environment. It would be great if I could get clarification on two questions.
The xConnect services and Solr run on different VMs. If I run the PowerShell script after commenting out the Solr and xConnect code, how can I manually set them up on those VMs? Is there any guidance available on this?
I would also like to split Authoring, Minions, Shops, Ops, BizFx and the Identity Server onto different VMs as part of a POC on scaling up, but the PowerShell script for installing Commerce does not do a role-based installation. How can I achieve this with the existing script?

How to use newer versions of HBase on Amazon Elastic MapReduce?

Amazon's Elastic MapReduce (EMR) tool seems to support only HBase v0.92.x and v0.94.x.
The documentation for the EMR AMIs and HBase is seemingly out-of-date and there is no information about HBase on the newest release label emr-4.0.0.
Using this example from an AWS engineer, I was able to concoct a way to install another version of HBase on the nodes, but it was ultimately unsuccessful.
After much trial and error with the Java SDK to provision EMR with better versions, I ask:
Is it possible to configure EMR to use a more recent version of HBase (e.g. 0.98.x or newer)?
After several days of trial, error and support tickets to AWS, I was able to implement HBase 0.98 on Amazon's ElasticMapReduce service. Here's how to do it using the Java SDK, some bash and ruby-based bootstrap actions.
Credit for these bash and ruby scripts goes to Amazon support. They are in-development scripts and not officially supported. It's a hack.
Supported Version: HBase 0.98.2 for Hadoop 2
I also created mirrors in Google Drive for the supporting files in case Amazon pulls them down from S3.
Java SDK example
import java.util.ArrayList;

import com.amazonaws.services.elasticmapreduce.model.BootstrapActionConfig;
import com.amazonaws.services.elasticmapreduce.model.JobFlowInstancesConfig;
import com.amazonaws.services.elasticmapreduce.model.RunJobFlowRequest;
import com.amazonaws.services.elasticmapreduce.model.ScriptBootstrapActionConfig;
import com.amazonaws.services.elasticmapreduce.model.StepConfig;

// Cluster hardware configuration (assumed here; adjust types and counts to suit)
JobFlowInstancesConfig instancesConfig = new JobFlowInstancesConfig()
    .withMasterInstanceType("m3.xlarge")
    .withSlaveInstanceType("m3.xlarge")
    .withInstanceCount(3)
    .withKeepJobFlowAliveWhenNoSteps(true);

RunJobFlowRequest jobFlowRequest = new RunJobFlowRequest()
    .withSteps(new ArrayList<StepConfig>())
    .withName("HBase 0.98 Test")
    .withAmiVersion("3.1.0")
    .withInstances(instancesConfig)
    .withLogUri("s3://bucket/path/to/logs")
    .withVisibleToAllUsers(true)
    // Bootstrap action 1: fetch the HBase 0.98 tarball onto every node
    .withBootstrapActions(new BootstrapActionConfig()
        .withName("Download HBase")
        .withScriptBootstrapAction(new ScriptBootstrapActionConfig()
            .withPath("s3://bucket/path/to/wget/ssh/script.sh")))
    // Bootstrap action 2: install and configure HBase from the tarball
    .withBootstrapActions(new BootstrapActionConfig()
        .withName("Install HBase")
        .withScriptBootstrapAction(new ScriptBootstrapActionConfig()
            .withPath("s3://bucket/path/to/hbase/install/ruby/script")))
    .withServiceRole("EMR_DefaultRole")
    .withJobFlowRole("EMR_EC2_DefaultRole");
// Submit with AmazonElasticMapReduceClient#runJobFlow(jobFlowRequest)
"Download HBase" Bootstrap Action (bash script)
Original from S3
Mirror from Google Drive
"Install HBase" Bootstrap Action (ruby script)
Original from S3
Mirror from Google Drive
HBase Installation Tarball (used in "Download HBase" script)
Original from S3
Mirror from Google Drive
Make copies of these files
I highly recommend that you download these files and upload them to your own S3 bucket for use in your bootstrap actions and scripts. Adjust where necessary.

Activity Monitoring Toolbox missing from WSO2 BAM Server 2.4.0

The documentation of the WSO2 Business Activity Monitor version 2.4.0 refers to an Activity Monitoring Toolbox which is not present in my installation (default configuration on 64-bit Linux with a 64-bit JVM, 1.6.0_39).
Can I download and install the Activity Monitoring Toolbox from an external location?
Thanks,
Kai
From the BAM 2.4.0 release onwards, the previous BAM activity monitoring components have been deprecated. They were replaced by a new implementation of activity search and monitoring with many more added features.
The following artifacts will no longer be shipped:
With the BAM distribution: the activity monitoring sample and the activity monitoring toolboxes.
With the BAM data agents: the activity monitoring data agent, which has so far been available under 'Service Data Publishing'.
The newer activity search component has its own Jaggery app, which can be used to query data directly from Cassandra using indices rather than summarising data with Hive scripts. It is also shipped with the BAM distribution by default, thereby removing the need to install a dedicated toolbox.
The message tracer will replace the activity data publisher for dumping SOAP payloads to BAM. It will also serve in correlating messages based on ID.
Additional information can be found at: http://docs.wso2.org/display/BAM240/Activity+Monitoring+Dashboard
The Activity Monitoring toolbox is packaged and installed by default in BAM 2.4.0, so you do not have to install it again. If you want to install it into BAM 2.3.0 or an older version, it will not be easy: the new Activity Monitoring toolbox in BAM 2.4.0 depends not only on the deployable artifacts (the stream definition, the analytics script and the dashboard) but also on some other jar files which live in the plugins directory.