Unable to configure multiple queues (MultiQ) for a port in DPDK testpmd: port shows max_rx_queue=1

I need help enabling multiqueue (multiple queues) for a port in DPDK testpmd. When I pass --rxq=2 or --txq=N on the command line, the port still shows max_rx_queue=1 and I am not able to configure more queues. How can I configure multiple queues?

One can modify kvm-qemu, either on the command line or in the libvirt XML, to reflect the desired number of virtio queues.
XML:
<interface type='vhostuser'>
  <mac address='00:00:00:00:00:01'/>
  <source type='unix' path='/usr/local/var/run/openvswitch/dpdkvhostuser0' mode='client'/>
  <model type='virtio'/>
  <driver queues='2'>
    <host mrg_rxbuf='off'/>
  </driver>
</interface>
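The command-line equivalent is roughly the following sketch (the socket path and MAC mirror the XML above; vectors=2*queues+2 is the usual rule of thumb, and the vhost-user backend on the host, e.g. the OVS-DPDK port, must also be created with a matching queue count):
-chardev socket,id=char0,path=/usr/local/var/run/openvswitch/dpdkvhostuser0 \
-netdev type=vhost-user,id=net0,chardev=char0,queues=2 \
-device virtio-net-pci,netdev=net0,mac=00:00:00:00:00:01,mq=on,vectors=6
Once the guest sees a multiqueue virtio device, testpmd should accept, for example, --rxq=2 --txq=2.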
P.S.: based on the update in the comments, the VM environment is hosted on VirtualBox. Requesting an update to the question with those details.

Related

Specifying a JDBC Database Driver Max Threads

I have a ColdFusion 10 server. I am using a JDBC driver to connect to a DB2 database. I came across the note below. Where is this setting? I also looked at the neo-*.xml files, but I did not see any DB driver thread setting. I am not sure if it is specific to ColdFusion 2016 either. I also looked for it in the ColdFusion 2018 administrator with no luck.
ColdFusion Server takes the SQL content of the cfquery tag and passes
it to the specified driver for the data source. The driver request is
handled by a thread. By default, the ColdFusion Administrator is
configured to limit the amount of active threads to 5.
https://helpx.adobe.com/coldfusion/kb/database-connections-handled-coldfusion.html
The ColdFusion Administrator limits the max connections to 5, and that limit takes effect only if you disable "Maintain connections".
If you want to increase the default to, say, 100 active connections, enable "Maintain connections" and set "Limit connections" to the desired number.
By default, ColdFusion does not write the max connections property into neo-datasource.xml.
The XML below shows both settings in neo-datasource.xml:
<var name="myDataSource">
....
<var name="pooling">
<boolean value="true" />
</var>
....
<struct type="coldfusion.server.ConfigMap">
<var name="MAXCONNECTIONS">
<string>100</string>
</var>
</struct>
....
</var>
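For context, every cfquery against this data source consumes one of those pooled driver connections; a minimal example (the query and table names are illustrative):
<cfquery name="getOrders" datasource="myDataSource">
    SELECT order_id, order_date FROM orders
</cfquery>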

Is there a way to run test cases in parallel on different EC2 instances (e.g. run a set of 90 cases across three AWS machines)?

Using TestNG Factory and DataProvider annotations, we have a set of test cases that need to be executed in parallel using Selenium Grid. We currently have three AWS instances with the required IPs. For now we are able to run cases in parallel on a single AWS instance, i.e. a set of 30 cases in parallel on one instance.
<?xml version="1.0"?>
<suite name="reg_tests" parallel="tests" thread-count="90">
<test name="sanity_01" parallel="instances" thread-count="30">
<classes>
<class name="com.X.Y"/>
</classes>
</test>
<test name="sanity_02" parallel="instances" thread-count="30">
<classes>
<class name="com.X.Y1"/>
</classes>
</test>
<test name="sanity_03" parallel="instances" thread-count="30">
<classes>
<class name="com.X.Y2"/>
</classes>
</test>
</suite>
We have a properties file from which we get the IP of the machine to run on, which obviously points to a single AWS machine.
WebDriver driver = new RemoteWebDriver(new URL(url), desiredCapabilities);
Here url is the IP of the AWS machine.
So the above code directs the run to a single machine. Is there a way to ask Selenium Grid to run on all three Grid machines that are already set up for executing test cases? Since thread management is handled internally, can this be done?
Yes, of course, but it depends on the hub.
Each node must be able to register with the hub successfully.
Note that your Selenium code targets only the hub, never a node; the hub then decides which node to redirect each session to, based on the capabilities you set.
For example, if sanity_01 requires Chrome and you target the hub, the hub reads the capability and redirects your session to the node machine/EC2 instance that is registered for Chrome:
baseURL = "http://demo.xyz.com/test/";
hubURL = "http://192.168.43.223:4444/wd/hub";
DesiredCapabilities capability = DesiredCapabilities.chrome();
capability.setBrowserName("chrome");
capability.setPlatform(Platform.WIN10);
driver = new RemoteWebDriver(new URL(hubURL), capability);
In the code above the hub is hubURL = "http://192.168.43.223:4444/wd/hub"; because the capability is set to Chrome, the hub will send the session to a Chrome node.
If two Chrome nodes are registered with the hub, it will redirect to either one depending on node availability.
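To make all three EC2 instances available, run the hub on one machine and register a node from each instance, then keep hubURL pointed at the hub. A rough sketch (the jar version and -maxSession value are assumptions; adjust to your setup):
# On the hub machine (e.g. 192.168.43.223)
java -jar selenium-server-standalone-3.141.59.jar -role hub

# On each of the three EC2 node machines
java -jar selenium-server-standalone-3.141.59.jar -role node -hub http://192.168.43.223:4444/grid/register -maxSession 30
With the nodes registered, the suite's thread-count of 90 can fan out across the three nodes as sessions become available.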

How to configure and enable Azure Service Fabric Reverse Proxy for an existing on-premises cluster?

Is the Azure Service Fabric Reverse Proxy available in an on-premises cluster? If so, how can I enable it for an existing cluster?
The Service Fabric Reverse Proxy is described here. It allows clients external to the cluster to access application services by name with a special URL, without needing to know the exact host:port on which an instance of the service is running (which may change as services are automatically moved around).
By default the Service Fabric Reverse Proxy does not appear to be enabled for my on-prem cluster with two instances of a stateless service. I tried using the documented port 19008 but could not reach the service using the recommended URI syntax.
To wit, this works:
http://fqdn:20001/api/odata/v1/$metadata
but this does not:
http://fqdn:19008/MyApp/MyService/api/odata/v1/$metadata
In the NodeTypes section of the ClusterConfig JSON used to set up my on-prem cluster, there is a property "httpGatewayEndpointPort": "19080", but that port does not appear to work as a reverse proxy (it is the Service Fabric Explorer web-app endpoint). I am guessing that the needed configuration is specified somehow in the cluster config JSON. There are instructions in the referenced article that explain how to configure the reverse proxy in the cloud, but not on-premises.
What I am looking for are instructions on how to set up the Service Fabric reverse proxy in an on-premises multi-machine cluster or dev cluster.
Yes, the reverse proxy is available on-premises.
To get it working for an existing cluster, it must be configured and enabled in the cluster config XML and then the new config must be deployed, as described below.
For a new cluster, set it up in the cluster config JSON before creating the cluster, as described by @Scott Weldon.
@Senj provided the clue (thanks!) that led me to the answer. I had recently updated my Service Fabric bits on my dev box to 5.1.163.9590. When I looked in C:\SfDevCluster\Data\FabricHostSettings.xml, I noticed the following:
<Section Name="FabricNode">
...
<Parameter Name="NodeVersion" Value="5.1.163.9590:1.0:0" />
...
<Parameter Name="HttpApplicationGatewayListenAddress" Value="19081" />
<Parameter Name="HttpApplicationGatewayProtocol" Value="http" />
...
</Section>
Interesting! With the dev cluster fired up, I browsed to:
http://localhost:19081/MyApp/MyService/api/odata/v1/$metadata
and voila! My API returned the expected data. So @Senj was correct that it has to do with the HttpApplicationGateway settings. I am guessing that in the latest SDK version it is pre-configured and enabled by default. (What threw me off is that all the docs refer to port 19008, but the actual configured port was 19081!)
In order to get the reverse proxy to work on the 'real' multi-machine (VM) cluster, I did the following. (Note: I don't think upgrading the cluster code package was necessary, but since I had nothing in my image store for the cluster upgrade, and the cluster upgrade process requires a code package, I used the latest version.)
Copy the existing cluster manifest (from the Manifest tab in Service Fabric Explorer), paste into a new XML file, bump the version number and modify as follows:
To the NodeType Endpoints section, add:
<NodeTypes>
  <NodeType Name="NodeType0">
    <Endpoints>
      <HttpApplicationGatewayEndpoint Port="19081" Protocol="http" />
      ...
    </Endpoints>
  </NodeType>
</NodeTypes>
and under <FabricSettings>, add the following section:
<Section Name="ApplicationGateway/Http">
<Parameter Name="IsEnabled" Value="true" />
</Section>
Using Service Fabric PowerShell commands (a sketch of these steps follows the list):
Copy the new cluster config (the previously copied manifest.xml) to the fabric image store
Register the new cluster config
Copy the Service Fabric Runtime cluster codepackage (available here - see the release notes for the link to the MSI) to the image store
Register the cluster codepackage
Start and complete cluster upgrade (I used unmonitored manual mode, which does one VM at a time and requires a manual Resume command after each node is complete)
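In PowerShell those steps look roughly like this (the cmdlet names are the standard Service Fabric ones, but the paths, versions, and image store connection string below are illustrative assumptions):
# Connect to the cluster management endpoint (fqdn is illustrative)
Connect-ServiceFabricCluster -ConnectionEndpoint "fqdn:19000"

# Copy the modified manifest and the runtime code package to the image store
# ("fabric:ImageStore" is an assumption; use your cluster's image store connection string)
Copy-ServiceFabricClusterPackage -Code -Config `
    -ClusterManifestPath .\ClusterManifest.v2.xml `
    -ClusterManifestPathInImageStore ClusterManifest.v2.xml `
    -CodePackagePath .\MicrosoftAzureServiceFabric.5.1.163.9590.cab `
    -CodePackagePathInImageStore MicrosoftAzureServiceFabric.5.1.163.9590.cab `
    -ImageStoreConnectionString "fabric:ImageStore"

# Register both packages
Register-ServiceFabricClusterPackage -Code -Config `
    -ClusterManifestPathInImageStore ClusterManifest.v2.xml `
    -CodePackagePathInImageStore MicrosoftAzureServiceFabric.5.1.163.9590.cab

# Start the upgrade in unmonitored manual mode, then resume after each node completes
Start-ServiceFabricClusterUpgrade -Code -Config `
    -CodePackageVersion 5.1.163.9590 -ClusterManifestVersion 2 `
    -UnmonitoredManual
Resume-ServiceFabricClusterUpgrade -UpgradeDomainName "UD0"   # repeat per upgrade domain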
After the cluster upgrade was complete, I was able to query my service API using the reverse proxy endpoint and appname/servicename URL syntax:
http://fqdn:19081/MyApp/MyService/api/odata/v1/$metadata
I enabled this in the standalone installer version (5.1.156) by adding the following line to the JSON configuration file under the nodeTypes element (I used ClusterConfig.Unsecure.MultiMachine.json but I assume any of the JSON files would work):
"httpApplicationGatewayEndpointPort": "19081"
So the final nodeTypes looked like this:
"nodeTypes": [
{
"name": "NodeType0",
"clientConnectionEndpointPort": "19000",
"clusterConnectionEndpoint": "19001",
"httpGatewayEndpointPort": "19080",
"httpApplicationGatewayEndpointPort": "19081",
"applicationPorts": {
"startPort": "20001",
"endPort": "20031"
},
"ephemeralPorts": {
"startPort": "20032",
"endPort": "20062"
},
"isPrimary": true
}
]
I think it has something to do with the HttpApplicationGatewayEndpoint property; see also my question at https://github.com/Azure/service-fabric-issues/issues/5
But it doesn't work for me.
Also notice that
<Section Name="ApplicationGateway/Http">
<Parameter Name="IsEnabled" Value="true" />
</Section>
is true for me.
Edit:
I noticed that on my Windows-only installation, HttpApplicationGatewayListenAddress has the value 0 in FabricHostSettings.xml:
<Parameter Name="HttpGatewayListenAddress" Value="19080" />
<Parameter Name="HttpGatewayProtocol" Value="http" />
<Parameter Name="HttpApplicationGatewayListenAddress" Value="0" />
<Parameter Name="HttpApplicationGatewayProtocol" Value="" />

Setting up an AppFabric Cluster - need some clarification

There's something that's not very clear to me and does not show up in the documentation:
After successfully setting up a Cache Cluster with 2 hosts, what is the address for connecting to this cluster? I know this screams setting up a Windows Cluster all over, but I just wanted to make sure there wasn't anything I'm missing.
I am not sure I understood your question. The default port to connect to on one of your cluster nodes is 22233. Here is an example. You can build up the connection in code or by putting the cache host configuration into your Web.config file.
There is no 'address' as such, as you might expect in a load-balancing scenario; instead, you specify the host(s) in the hosts section of the dataCacheClient config element:
<dataCacheClient … >
  <hosts>
    <host name="server1.mydomain.local" cachePort="22233" />
    <host name="server2.mydomain.local" cachePort="22233" />
  </hosts>
</dataCacheClient>
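If you would rather build the connection in code (the other option mentioned above), a minimal sketch using the AppFabric client API looks like this; the host names mirror the config above, and the cache name "default" is an assumption:
// Requires references to Microsoft.ApplicationServer.Caching.Client and .Core
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

var servers = new List<DataCacheServerEndpoint>
{
    new DataCacheServerEndpoint("server1.mydomain.local", 22233),
    new DataCacheServerEndpoint("server2.mydomain.local", 22233)
};
var config = new DataCacheFactoryConfiguration { Servers = servers };
var factory = new DataCacheFactory(config);
DataCache cache = factory.GetCache("default"); // cache name is illustrative
cache.Put("greeting", "hello");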

How to escape/translate encoding/characters via regex and XSLT?

I need some help fixing these encodings that are being output as special characters in an XML file. I'm executing a PowerShell script to create a certificate using a platform, via an SSH connection to a server. The expected response is a clean XML document, but unfortunately this special-character issue is happening. I might need to use XSLT and regex as a workaround to fix the XML:
<output>
<line index="1">(B)0[?7l[H[J[3;1H Directory: D:\Certificatesdev[6;1HMode </line>
<line index="2">[24;1H [24;1H-----------------------------------------------------------------------------[24;1H</line>
<line index="3">[24;1H [24;1HSign Certificate: Successful[24;1H</line>
<line index="4">[24;1H [24;1H-----------------------------------------------------------------------------[24;1H</line>
<line index="5">[24;1H [24;1HEvent Summary:[24;1H</line>
<line index="6">[24;1H [24;1H*Generate Sign Certificate File: Passed[24;1H</line>
<line index="7">[24;1H [24;1H</line>
<line index="8">[24;1H [23;1HGenerated D:\Certificatesdev\Folder\test.com-20150511-210523.[24;1Hcer[24;1H</line>
<line index="9">[24;1H [24;1H*Get Actual Certificate Expiry Date: Passed[24;1H</line>
<line index="10">[24;1H [24;1H*Copy to Shared Folder: Passed[24;1H</line>
<line index="11">[24;1H [24;1H</line>
<line index="12">[24;1H [23;1HFile Location: D:\Certificatesdev\Folder\test.com-20150511-21[24;1H0523[24;1H</line>
<line index="13">[24;1H [24;1H*Add to Certificate DB: Successful[24;1H</line>
<line index="14">[24;1H [24;1HCertificate Automation Completed Successfully[24;1H</line>
<line index="15">[24;1H [24;1H</line>
<line index="16">[24;1H [24;1H</line>
<line index="17">[24;1H [24;1H[?7h</line>
</output>
Note that the extra spaces are not expected either. I am not quite sure what kind of encoding this is.
Please help. Thank you.
I'm denis bider from Bitvise. My attention was called to this because I understand the issue you're experiencing happens when connecting to a version of WinSSHD or Bitvise SSH Server.
If I understand correctly, you're connecting to the SSH Server in order to receive from it an XML type of file. It looks like the way you are connecting to the SSH Server and retrieving data uses either an exec request, or a terminal shell session.
The issue appears to be that when you receive data on the client side, you are expecting a plain stream of data, but what you're receiving instead are terminal escape sequences. This appears to be because your SSH client, when connecting to the server, requests terminal emulation.
The best way to resolve this issue would be either to:
change the SSH client parameters so that it does NOT request a terminal (such as vt100 or xterm) when connecting to the server, or so that it requests the "dumb" terminal type instead;
or alternatively, if the SSH client does not allow changing the terminal type being requested, the SSH Server supports a setting that forces it to use the "dumb" terminal type (no escape sequences), no matter what the client requests.
If you are looking to change this via the server-side setting, you can find it in SSH Server settings, either in the account or group settings entry that contains settings for this user. The name of the setting is "Always use 'dumb' pseudo terminal".
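If you still need the regex workaround on data you have already captured, note that the remnants in your sample (e.g. [24;1H, [?7l, (B) are VT100 control sequences whose leading ESC byte has been dropped. A hedged PowerShell sketch that strips them (the pattern assumes the ESC is already gone, as in your sample, and may over-match legitimate bracketed text; tighten it as needed):
# Paths are illustrative; adjust the pattern if your data still contains the ESC byte (`e / \x1b)
$raw   = Get-Content -Raw .\output.xml
$clean = $raw -replace '\[\??[0-9;]*[A-Za-z]', '' -replace '\(B', ''
Set-Content -Path .\output.clean.xml -Value $clean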