I need some help fixing these escape sequences being output as special characters in an XML document. I'm executing a PowerShell script to create a certificate via a platform over an SSH connection to a server. The expected response is a clean XML form, but this special-character issue keeps happening. I might need to use XSLT and regex as a workaround to fix the XML:
<output>
<line index="1">(B)0[?7l[H[J[3;1H Directory: D:\Certificatesdev[6;1HMode </line>
<line index="2">[24;1H [24;1H-----------------------------------------------------------------------------[24;1H</line>
<line index="3">[24;1H [24;1HSign Certificate: Successful[24;1H</line>
<line index="4">[24;1H [24;1H-----------------------------------------------------------------------------[24;1H</line>
<line index="5">[24;1H [24;1HEvent Summary:[24;1H</line>
<line index="6">[24;1H [24;1H*Generate Sign Certificate File: Passed[24;1H</line>
<line index="7">[24;1H [24;1H</line>
<line index="8">[24;1H [23;1HGenerated D:\Certificatesdev\Folder\test.com-20150511-210523.[24;1Hcer[24;1H</line>
<line index="9">[24;1H [24;1H*Get Actual Certificate Expiry Date: Passed[24;1H</line>
<line index="10">[24;1H [24;1H*Copy to Shared Folder: Passed[24;1H</line>
<line index="11">[24;1H [24;1H</line>
<line index="12">[24;1H [23;1HFile Location: D:\Certificatesdev\Folder\test.com-20150511-21[24;1H0523[24;1H</line>
<line index="13">[24;1H [24;1H*Add to Certificate DB: Successful[24;1H</line>
<line index="14">[24;1H [24;1HCertificate Automation Completed Successfully[24;1H</line>
<line index="15">[24;1H [24;1H</line>
<line index="16">[24;1H [24;1H</line>
<line index="17">[24;1H [24;1H[?7h</line>
</output>
Note that the extra spaces are not expected either. I am not quite sure what kind of encoding this is.
Please help. Thank you.
I'm denis bider from Bitvise. My attention was called to this because I understand the issue you're experiencing happens when connecting to a version of WinSSHD or Bitvise SSH Server.
If I understand correctly, you're connecting to the SSH Server in order to receive from it an XML type of file. It looks like the way you are connecting to the SSH Server and retrieving data uses either an exec request, or a terminal shell session.
The issue appears to be that when you receive data on the client side, you are expecting a plain stream of data, but what you're receiving instead are terminal escape sequences. This appears to be because your SSH client, when connecting to the server, requests terminal emulation.
The best way to resolve this issue would be either to:
change the SSH client parameters so that it does NOT request a terminal (such as vt100 or xterm) when connecting to the server, or so that it requests the "dumb" terminal type instead;
or alternatively, if the SSH client does not allow changing the terminal type being requested, the SSH Server supports a setting that forces it to use the "dumb" terminal type (no escape sequences), no matter what the client requests.
If you are looking to change this via the server-side setting, you can find it in SSH Server settings, either in the account or group settings entry that contains settings for this user. The name of the setting is "Always use 'dumb' pseudo terminal".
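If changing the terminal type is not possible on either side, the regex workaround mentioned in the question is also viable. Below is a minimal sketch in Java (the class name is illustrative), assuming the raw stream still contains the ESC character (0x1B) in front of each sequence:

import java.util.regex.Pattern;

public class AnsiStripper {
    // Matches CSI sequences such as ESC[24;1H or ESC[?7l, and charset
    // designations such as ESC(B, both of which appear in the output above.
    private static final Pattern ANSI =
            Pattern.compile("\u001B(\\[[0-9;?]*[A-Za-z]|\\([A-Z0-9])");

    public static String strip(String raw) {
        return ANSI.matcher(raw).replaceAll("");
    }

    public static void main(String[] args) {
        String line = "\u001B[24;1H \u001B[24;1HSign Certificate: Successful\u001B[24;1H";
        System.out.println(strip(line).trim());  // prints: Sign Certificate: Successful
    }
}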
Related
I need help enabling multi-queue (multiple queues) for a port in DPDK testpmd. When I use options like --rxq=2 or --txq=N, it shows max_rx_queue = 1 and I am not able to configure more queues. Please help me figure out how to configure multiple queues.
One can modify kvm-qemu either on the command line or in the libvirt XML to reflect the desired number of virtio queues.
XML:
<interface type='vhostuser'>
  <mac address='00:00:00:00:00:01'/>
  <source type='unix' path='/usr/local/var/run/openvswitch/dpdkvhostuser0' mode='client'/>
  <model type='virtio'/>
  <driver queues='2'>
    <host mrg_rxbuf='off'/>
  </driver>
</interface>
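For the command-line route, the equivalent qemu arguments would look roughly like this sketch (the chardev/netdev ids are illustrative; vectors is conventionally 2*queues+2):

-chardev socket,id=char0,path=/usr/local/var/run/openvswitch/dpdkvhostuser0
-netdev type=vhost-user,id=net0,chardev=char0,vhostforce,queues=2
-device virtio-net-pci,netdev=net0,mac=00:00:00:00:00:01,mq=on,vectors=6

Once the vNIC exposes multiple queue pairs, testpmd should accept --rxq=2 --txq=2.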
P.S.: based on the update in the comments, the VM environment is hosted on VirtualBox. Requesting an update.
I'm trying to overwrite a rule as per the documentation, like this:
https://documentation.wazuh.com/3.12/learning-wazuh/replace-stock-rule.html
So I've copied one rule into local_rules.xml and created my own group (prior to that I also tried putting it within the rule's original group tag), but it seems to be completely ignored.
This is what I've put in local_rules.xml:
<group name="istvan">
<rule frequency="8" id="31533" level="9" overwrite="yes" timeframe="20">
<if_matched_sid>31530</if_matched_sid>
<same_source_ip/>
<description>High amount of POST requests in a small period of time (likely bot).</description>
<group>pci_dss_6.5,pci_dss_11.4,gdpr_IV_35.7.d,nist_800_53_SA.11,nist_800_53_SI.4,</group>
</rule>
</group>
I've only changed the level to 9 and added the overwrite="yes" attribute. The idea is that these alerts stop being sent to me (as my threshold is set to level 10+). I save and restart, but the change is completely ignored, and I'm still getting those alerts tagged with level 10.
Frankly, I'm starting to be clueless as to why this is happening.
Any ideas?
Thanks.
A good way to test the expected behaviour is to use /var/ossec/bin/ossec-logtest, as mentioned in that doc.
To elaborate, I will take the example from that doc:
I will overwrite rule 5716 (https://github.com/wazuh/wazuh-ruleset/blob/317052199f751e5ea936730710b71b27fdfe2914/rules/0095-sshd_rules.xml#L121) as below:
[root@localhost vagrant]# egrep -iE "ssh" /var/ossec/etc/rules/local_rules.xml -B 4 -A 3
<rule id="5716" overwrite="yes" level="9">
<if_sid>5700</if_sid>
<match>^Failed|^error: PAM: Authentication</match>
<description>sshd: authentication failed.</description>
<group>authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,gpg13_7.1,gdpr_IV_35.7.d,gdpr_IV_32.2,hipaa_164.312.b,nist_800_53_AU.14,nist_800_53_AC.7,</group>
</rule>
The logs can be tested without having to restart the Wazuh manager: open /var/ossec/bin/ossec-logtest and paste the log:
2020/05/26 09:03:00 ossec-testrule: INFO: Started (pid: 9849).
ossec-testrule: Type one log per line.
Oct 23 17:27:17 agent sshd[8221]: Failed password for root from ::1 port 60164 ssh2
**Phase 1: Completed pre-decoding.
full event: 'Oct 23 17:27:17 agent sshd[8221]: Failed password for root from ::1 port 60164 ssh2'
timestamp: 'Oct 23 17:27:17'
hostname: 'agent'
program_name: 'sshd'
log: 'Failed password for root from ::1 port 60164 ssh2'
**Phase 2: Completed decoding.
decoder: 'sshd'
dstuser: 'root'
srcip: '::1'
srcport: '60164'
**Phase 3: Completed filtering (rules).
Rule id: '5716'
Level: '9'
Description: 'sshd: authentication failed.'
As expected, the level (initially 5) has been overwritten. In your case, though, you will have to paste the log 8 times within a timeframe of less than 20 s to trigger that rule.
If you can share the logs triggering that alert, I can test with them.
On the other hand, you can create a sibling rule to simply ignore your rule 31533, similar to the one below:
<rule id="100010" level="2">
<if_sid>31533</if_sid>
<description>Ignore rule 31533</description>
</rule>
Make sure to restart the Wazuh manager afterward to apply the change.
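For example, on a systemd-based system (assuming the default service name):

systemctl restart wazuh-manager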
You can find more information about customizing rules/decoders here : https://wazuh.com/blog/creating-decoders-and-rules-from-scratch/
Hope this helps,
After finally talking to the developers, it turns out that it was indeed ignoring local_rules.xml. I had a strange exclusion of one rule (probably problematic syntax, although it didn't report an error):
"rule_exclude": [
"31151"
When I removed it, it started working as described in the user's guide.
Using TestNG @Factory and @DataProvider annotations, we have a set of test cases that need to be executed in parallel using Selenium Grid. As of now we have, say, three AWS instances with the required IPs. For now, we are able to run a set of cases in parallel on a single AWS instance, i.e. able to run a set of 30 cases in parallel on one instance.
<?xml version="1.0"?>
<suite name="reg_tests" parallel="tests" thread-count="90">
<test name="sanity_01" parallel="instances" thread-count="30">
<classes>
<class name="com.X.Y"/>
</classes>
</test>
<test name="sanity_02" parallel="instances" thread-count="30">
<classes>
<class name="com.X.Y1"/>
</classes>
</test>
<test name="sanity_03" parallel="instances" thread-count="30">
<classes>
<class name="com.X.Y2"/>
</classes>
</test>
</suite>
We have a properties file from which we get the IP of the machine we want to run on, which obviously points to a single AWS machine.
WebDriver driver = new RemoteWebDriver(new URL(url),
desiredCapabilities);
url - IP of the AWS machine.
So the above code directs the run to a single machine. Is there a way to ask Selenium Grid to run on all three Grid machines that are already set up for executing test cases? Since thread maintenance is managed internally, can this be done?
Yes, of course, but it depends on the hub.
The nodes must be able to register with the hub successfully.
Note that your Selenium code targets only the hub, never a node; the hub then decides which node to redirect to, based on the capabilities you set.
For example, if your sanity_01 requires Chrome capabilities and you target the hub, the hub understands the capability and redirects your session to a node machine/EC2 instance that is registered for Chrome:
import java.net.URL;
import org.openqa.selenium.Platform;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

String baseURL = "http://demo.xyz.com/test/";
String hubURL = "http://192.168.43.223:4444/wd/hub";
DesiredCapabilities capability = DesiredCapabilities.chrome();
capability.setBrowserName("chrome");
capability.setPlatform(Platform.WIN10);
WebDriver driver = new RemoteWebDriver(new URL(hubURL), capability);
In the above code the hub is hubURL = "http://192.168.43.223:4444/wd/hub"; and as the capability is set to Chrome, the hub will send the session to a Chrome node.
If two Chrome nodes are registered with the hub, it will redirect to either one depending on node availability.
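For reference, a typical way to bring up the hub and register each of the three AWS machines as nodes (a sketch assuming the classic selenium-server-standalone jar; the hub IP is the one used above):

java -jar selenium-server-standalone.jar -role hub
java -jar selenium-server-standalone.jar -role node -hub http://192.168.43.223:4444/grid/register

Run the first command on the hub machine and the second on each node machine; sessions are then distributed across whichever registered nodes match the requested capabilities.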
Is the Azure Service Fabric Reverse Proxy available in an on-premises cluster? If so, how can I enable it for an existing cluster?
The Service Fabric Reverse Proxy is described here. It allows clients external to the cluster to access application services by name with a special URL, without needing to know the exact host:port on which an instance of the service is running (which may change as services are automatically moved around).
By default the Service Fabric Reverse Proxy does not appear to be enabled for my on-prem cluster with two instances of a stateless service. I tried using the documented port 19008 but could not reach the service using the recommended URI syntax.
To wit, this works:
http://fqdn:20001/api/odata/v1/$metadata
but this does not:
http://fqdn:19008/MyApp/MyService/api/odata/v1/$metadata
In the NodeTypes section of the ClusterConfig JSON used to set up my on-prem cluster, there is a property "httpGatewayEndpointPort": "19080", but that port does not appear to work as a reverse proxy (it is the Service Fabric Explorer web-app endpoint). I am guessing that the needed configuration is specified somehow in the cluster config JSON. There are instructions in the referenced article that explain how to configure the reverse proxy in the cloud, but not on-premises.
What I am looking for are instructions on how to set up the Service Fabric reverse proxy in an on-premises multi-machine cluster or dev cluster.
Yes, the reverse proxy is available on-premises.
To get it working for an existing cluster, it must be configured and enabled in the cluster config XML and then the new config must be deployed, as described below.
For a new cluster, set it up in the cluster config JSON before creating the cluster, as described by @Scott Weldon.
@Senj provided the clue (thanks!) that led me to the answer. I had recently updated my Service Fabric bits on my dev box to 5.1.163.9590. When I looked in C:\SfDevCluster\Data\FabricHostSettings.xml, I noticed the following:
<Section Name="FabricNode">
...
<Parameter Name="NodeVersion" Value="5.1.163.9590:1.0:0" />
...
<Parameter Name="HttpApplicationGatewayListenAddress" Value="19081" />
<Parameter Name="HttpApplicationGatewayProtocol" Value="http" />
...
</Section>
Interesting! With the dev cluster fired up, I browsed to:
http://localhost:19081/MyApp/MyService/api/odata/v1/$metadata
and voila! My API returned the expected data. So @Senj was correct that it has to do with the HttpApplicationGateway settings. I am guessing that in the latest SDK version it is pre-configured and enabled by default. (What threw me off is that all the docs refer to port 19008, but the actual configured port was 19081!)
In order to get the reverse proxy to work on the 'real' multi-machine (VM) cluster, I did the following (Note: I don't think upgrading the cluster codepackage was necessary, but since I had nothing in my image store for the cluster upgrade, and the cluster upgrade process requires a code package, I used the latest version):
Copy the existing cluster manifest (from the Manifest tab in Service Fabric Explorer), paste into a new XML file, bump the version number and modify as follows:
To the NodeType Endpoints section, add:
<NodeTypes>
<NodeType Name="NodeType0">
<Endpoints>
<HttpApplicationGatewayEndpoint Port="19081" Protocol="http" />
...
</Endpoints>
</NodeType>
</NodeTypes>
and under <FabricSettings>, add the following section:
<Section Name="ApplicationGateway/Http">
<Parameter Name="IsEnabled" Value="true" />
</Section>
Using Service Fabric PowerShell commands (a sketch of these steps follows the list):
Copy the new cluster config (the previously copied manifest.xml) to the fabric image store
Register the new cluster config
Copy the Service Fabric Runtime cluster codepackage (available here - see the release notes for the link to the MSI) to the image store
Register the cluster codepackage
Start and complete cluster upgrade (I used unmonitored manual mode, which does one VM at a time and requires a manual Resume command after each node is complete)
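Here is a rough PowerShell sketch of those five steps; the manifest name, package file name, versions, and image store connection string are illustrative and depend on your cluster:

Connect-ServiceFabricCluster -ConnectionEndpoint "fqdn:19000"
# Steps 1-2: copy and register the new cluster config (the modified manifest)
Copy-ServiceFabricClusterPackage -Config -ClusterManifestPath .\ClusterManifest.v2.xml -ClusterManifestPathInImageStore ClusterManifest.v2.xml -ImageStoreConnectionString "fabric:ImageStore"
Register-ServiceFabricClusterPackage -Config -ClusterManifestPathInImageStore ClusterManifest.v2.xml
# Steps 3-4: copy and register the runtime code package (the .cab behind the MSI link)
Copy-ServiceFabricClusterPackage -Code -CodePackagePath .\MicrosoftAzureServiceFabric.5.1.163.9590.cab -CodePackagePathInImageStore MicrosoftAzureServiceFabric.5.1.163.9590.cab -ImageStoreConnectionString "fabric:ImageStore"
Register-ServiceFabricClusterPackage -Code -CodePackagePathInImageStore MicrosoftAzureServiceFabric.5.1.163.9590.cab
# Step 5: start the upgrade in unmonitored manual mode, then resume per upgrade domain
Start-ServiceFabricClusterUpgrade -Code -Config -CodePackageVersion 5.1.163.9590 -ClusterManifestVersion 2 -UnmonitoredManual
Resume-ServiceFabricClusterUpgrade -UpgradeDomainName "UD0"  # repeat for each upgrade domain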
After the cluster upgrade was complete, I was able to query my service API using the reverse proxy endpoint and appname/servicename URL syntax:
http://fqdn:19081/MyApp/MyService/api/odata/v1/$metadata
I enabled this in the standalone installer version (5.1.156) by adding the following line to the JSON configuration file under the nodeTypes element (I used ClusterConfig.Unsecure.MultiMachine.json but I assume any of the JSON files would work):
"httpApplicationGatewayEndpointPort": "19081"
So the final nodeTypes looked like this:
"nodeTypes": [
{
"name": "NodeType0",
"clientConnectionEndpointPort": "19000",
"clusterConnectionEndpoint": "19001",
"httpGatewayEndpointPort": "19080",
"httpApplicationGatewayEndpointPort": "19081",
"applicationPorts": {
"startPort": "20001",
"endPort": "20031"
},
"ephemeralPorts": {
"startPort": "20032",
"endPort": "20062"
},
"isPrimary": true
}
]
I think it has something to do with the HttpApplicationGatewayEndpoint property, see also my question on https://github.com/Azure/service-fabric-issues/issues/5
But it doesn't work for me...
Also notice that
<Section Name="ApplicationGateway/Http">
<Parameter Name="IsEnabled" Value="true" />
</Section>
is true for me.
Edit:
I noticed that on my Windows-only installation, HttpApplicationGatewayListenAddress has the value 0 in FabricHostSettings.xml:
<Parameter Name="HttpGatewayListenAddress" Value="19080" />
<Parameter Name="HttpGatewayProtocol" Value="http" />
<Parameter Name="HttpApplicationGatewayListenAddress" Value="0" />
<Parameter Name="HttpApplicationGatewayProtocol" Value="" />
How can I create two server processes running simultaneously on one PC using FileZilla?
Make a duplicate of the first instance.
In the directory of the second instance, execute
"FileZilla Server" /servicename <...>
"FileZilla Server" /servicedisplayname <...>
"FileZilla Server" /install
Do not mix the commands; they must be executed one after another. Enter a unique identifier for both servicename and servicedisplayname that differs from the first instance.
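For example (the service names here are purely illustrative):

"FileZilla Server" /servicename "FileZilla Server 2"
"FileZilla Server" /servicedisplayname "FileZilla Server 2"
"FileZilla Server" /install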
Alternatively, you can insert/update the following lines into FileZilla Server.xml and then run with /install:
<Item name="Service name" type="string">newname</Item>
<Item name="Service display name" type="string">newdisplayname</Item>
Check your second instance in the Services Control Panel. Make sure both instances have a different listening socket.
From https://forum.filezilla-project.org/viewtopic.php?f=6&t=16015