Why do I get an undocumented value for MSBTS_HostInstance ClusterInstanceType?
I'm getting values of "3" for the ClusterInstanceType when I do a query of the BizTalk cluster ...
Get-CIMInstance -ClassName MSBTS_HostInstance -NameSpace root\MicrosoftBizTalkServer
According to the MSDN documentation, the expected values are ...
0 - UnClusteredInstance
1 - ClusteredInstance
2 - ClusteredVirtualInstance
So what is the "3", and what does it mean?

According to the script in "Error Attempting to Bounce Clustered Host", a value of 3 means HostIsClusteredManager (the cluster manager node).
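For convenience, here is a small PowerShell sketch that maps the ClusterInstanceType numbers, including the undocumented 3, onto readable names. Note the mapping for 3 comes from that script, not from official documentation:

$typeNames = @{
    0 = 'UnClusteredInstance'
    1 = 'ClusteredInstance'
    2 = 'ClusteredVirtualInstance'
    3 = 'HostIsClusteredManager'  # undocumented; per the script above
}

Get-CimInstance -ClassName MSBTS_HostInstance -Namespace root\MicrosoftBizTalkServer |
    Select-Object HostName, ClusterInstanceType,
        @{ Name = 'ClusterInstanceTypeName'; Expression = { $typeNames[[int]$_.ClusterInstanceType] } }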

Creating VM instance from machine image using REST API

I am struggling to create a VM instance from a machine image via the REST API.
I can create an instance using 'Try this API' from https://cloud.google.com/compute/docs/reference/rest/beta/instances/insert
{
  "name": "demo-x2",
  "projects": "resonant-time-282213",
  "zone": "asia-east1-c",
  "sourceMachineImage": "projects/resonant-time-282213/global/machineImages/t4-mtml-1",
  "machineType": "projects/resonant-time-282213/zones/asia-east1-c/machineTypes/n1-standard-8"
}
While using it inside a python code, it shows the following error in the terminal:
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://compute.googleapis.com/compute/v1/projects/resonant-time-282213/zones/asia-east1-c/instances?alt=json returned "Invalid value for field 'resource.disks': ''. No disks are specified.". Details: "Invalid value for field 'resource.disks': ''. No disks are specified.">
Which disk info is it looking for? The disk details are already in the machine image.
It looks like this is only available in the "beta" API right now.
So when you build your service object you have to use "beta" instead of "v1", like this:
service = discovery.build('compute', 'beta', credentials=credentials)
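As a rough sketch of the full call in Python (the key-file path is hypothetical; the project, zone, and body values are taken from the question; note that project and zone go in the URL, not the request body):

from google.oauth2 import service_account
from googleapiclient import discovery

# Hypothetical service-account key file; any valid credentials object works.
credentials = service_account.Credentials.from_service_account_file("key.json")

# Use the "beta" API surface, since sourceMachineImage is not in v1 yet.
service = discovery.build("compute", "beta", credentials=credentials)

body = {
    "name": "demo-x2",
    "sourceMachineImage": "projects/resonant-time-282213/global/machineImages/t4-mtml-1",
    "machineType": "projects/resonant-time-282213/zones/asia-east1-c/machineTypes/n1-standard-8",
}

# project and zone are passed as URL parameters here.
operation = service.instances().insert(
    project="resonant-time-282213",
    zone="asia-east1-c",
    body=body,
).execute()
print(operation["name"])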

Why does the WSO2 ID token return the 'groups' attribute as a comma-separated role string instead of a list of roles?

I have two WSO2 IS 5.7.0 environments, and the ID tokens returned by https://localhost:9443/oauth2/token differ slightly. In the first environment, the 'groups' attribute has values like the following:
"groups": [
"FOTA_WEB_View_User",
"FOTA_Engineer",
"FOTA_Manager",
"FOTA_WEB_Admin",
"Internal/everyone",
"_login",
"FOTA_APP"
]
but in the second environment, the 'groups' attribute has values like the following:
"groups": "BDA-AA-Flameout-Download,BDA-Diag-TempSensor-Download,BDA_Admin,BDA-AA-Superknock-Download,Internal/everyone,_login,BDA-AA-Flameout-View,BDA-AA-Superknock-View"
Actually, the first is the expected behavior.
The configuration seems the same in both environments, i.e. add a new service provider and then add the Requested Claims (screenshots omitted).
A possible cause is not configuring the MultiAttributeSeparator property in the user-mgt.xml file (/repository/conf/user-mgt.xml). This property is available in all UserStoreManager classes. In this case we need to set the MultiAttributeSeparator property to a comma (,) in the JDBCUserStoreManager properties, since the user store is a JDBC database (MySQL).
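A minimal sketch of the relevant fragment of user-mgt.xml (the surrounding properties are abbreviated; only the MultiAttributeSeparator line is the actual change):

<UserStoreManager class="org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager">
    <!-- ... other properties ... -->
    <Property name="MultiAttributeSeparator">,</Property>
</UserStoreManager>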

Pubnub functions not working on AWS Lambda

I'm trying to use the history method provided by PubNub to get the chat history of a channel, running my Node.js code on AWS Lambda. However, my function is not getting called. I'm not sure if I'm doing it correctly, but here's the code snippet:
var publishKey = "pub-c-cfe10ea4-redacted";
var subscribeKey = "sub-c-fedec8ba-redacted";
var channelId = "ChatRoomDemo";
var uuid;
var pubnub = {};

function readMessages(intent, session, callback) {
    pubnub = require("pubnub")({
        publish_key: publishKey,
        subscribe_key: subscribeKey
    });
    pubnub.history({
        channel: channelId,
        callback: function (m) {
            console.log(JSON.stringify(m));
        },
        count: 100,
        reverse: false
    });
}
I expect the message history in JSON format to be displayed on the console.
I had the same problem and finally got it working. What you will need to do is allow the CIDR address for pubnub.com. This was a foreign idea to me until I figured it out! Here's how to do that to publish to a channel:
Copy the CIDR address for pubnub.com, which is 54.246.196.128/26 (Source) [WARNING: do not do this - see comment below]
Log into https://console.aws.amazon.com
Under "Services" go to "VPC"
On the left, under "Security," click "Network ACLs"
Click "Create Network ACL" give it a name tag like "pubnub.com"
Select the VPC for your Lambda skill (if you're not sure, click around your Lambda function, you'll see it. You probably only have one listed like me)
Click "Yes, Create"
Under the "Outbound Rules" tab, click "Edit"
For "Rule #" I just used "1"
For "Type" I used "HTTP (80)"
For "Destination" I pasted in the CIDR from step 1
"Save"
Note: if you're subscribing to a channel, you'll also need to add an "Inbound Rule". A scripted version of these steps is sketched below.
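If you prefer to script the console steps, here is a rough boto3 sketch. The VPC ID is a placeholder, and the hard-coded CIDR carries the same caveat as step 1 above:

import boto3  # assumes AWS credentials are already configured

ec2 = boto3.client("ec2")

# Create a network ACL in the Lambda function's VPC (placeholder VPC ID).
acl = ec2.create_network_acl(VpcId="vpc-0123456789abcdef0")
acl_id = acl["NetworkAcl"]["NetworkAclId"]

# Outbound rule 1: allow HTTP (80) to the PubNub CIDR.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id,
    RuleNumber=1,
    Protocol="6",          # TCP
    RuleAction="allow",
    Egress=True,           # outbound rule; subscribers need an inbound rule too
    CidrBlock="54.246.196.128/26",
    PortRange={"From": 80, "To": 80},
)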

Logstash keeps overwriting Elasticsearch index template

I am sending log data to an Elasticsearch database using Logstash. I wanted to change the number of shards from 3 to 1 and issued the following command via the ES REST API:
PUT server_name/_template/logstash
{
  "template": "logstash",
  "settings": {
    "index.number_of_replicas": "0",
    "index.refresh_interval": "5s",
    "index.number_of_shards": "1"
  }
}
The server responded OK, and if I issue GET _template/logstash I can see that the number of shards is now set to 1.
Then I start Logstash with an output set to ship logs to Elasticsearch. There are no template-related settings. After I send log data, I see that the number of shards is set back to its default value (3).
I even tried to override it by referring to the template from the Logstash configuration file. Nope, whatever I specify, the settings are reset. It looks like Logstash keeps overwriting the Elasticsearch index settings with some defaults, and I can't figure out how to disable this.
UPDATE. I've added the following lines to the Logstash config file but it didn't help:
manage_template => false
template_overwrite => true
Also tried template_overwrite set to false. And I tried two different ways of setting the number of shards in the JSON file:
{
  "logstash": {
    "template": "logstash-*",
    "settings": {
      "index.number_of_replicas": "0",
      "index.refresh_interval": "5s",
      "index.number_of_shards": "1"
    }
  }
}
and
{
  "template": "logstash-*",
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 0
  }
}
In the elasticsearch {} block of your Logstash output configuration, you need to add manage_template => false if you want to manage the template outside of Logstash.
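A minimal sketch (the hosts value is a placeholder):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false   # template is managed via the ES REST API instead
  }
}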
OK, after many hours I've found the following:
The main reason for the overwritten template was that I had two Elasticsearch parts in the Logstash output section with conditional selection (if/else). Even though I correctly configured the index settings in the first ("if") part, as soon as Logstash encountered logs that satisfied the other ("else") part, it used the default template, which overwrote the custom one.
The manage_template option is important. It has to be set to true for the configuration block that refers to the custom template settings file, and to false for the block that shouldn't overwrite the custom Logstash template.

WSO2 CEP siddhi Filter issue

I am trying to use the Siddhi query language, but it seems I am misusing it.
I have some events with the following streamdef :
{
  'name': 'eu.ima.stat.events',
  'version': '1.1.0',
  'nickName': 'Flux event Information',
  'description': 'Details of Analytics Statistics',
  'metaData': [
    {'name': 'HostIP', 'type': 'STRING'}
  ],
  'correlationData': [
    {'name': 'ProcessType', 'type': 'STRING'},
    {'name': 'Flux', 'type': 'STRING'},
    {'name': 'ReferenceId', 'type': 'STRING'}
  ],
  'payloadData': [
    {'name': 'Timestamp', 'type': 'STRING'},
    {'name': 'EventCode', 'type': 'STRING'},
    {'name': 'Type', 'type': 'STRING'},
    {'name': 'EventInfo', 'type': 'STRING'}
  ]
}
I am just trying to filter events with the same processus value and the same flux value using a query like this one :
from myEventStream[processus == 'SomeName' and flux == 'someOtherName' ]
insert into someStream
processus, flux, timestamp
Whenever I try this, no output is generated. When I get rid of the filter
from myEventStream
insert into someStream
processus, flux, timestamp
all my events are there in the output.
What's wrong with my query?
I can see some spelling mistakes in your query. In the filter you have used an attribute named "processus", which is not in the event stream; that is why the query does not produce any output. When you create a bucket in WSO2 CEP, make sure that the bucket is deployed correctly on the CEP server, and check in the management console (CEP Buckets --> List).
In your situation the bucket will not be deployed because of the wrong configuration, and there will be error messages printed in the terminal where the CEP server runs. After correcting this mistake your query will run perfectly without any issue.
Regards,
Mohan
Considering Mohan's answer, rename 'ProcessType' in the stream or change your query like this (note that Flux and Timestamp are also capitalized in the stream definition):
from myEventStream[ProcessType == 'SomeName' and Flux == 'someOtherName']
insert into someStream
ProcessType, Flux, Timestamp