Can someone help me run a single script on multiple devices in parallel using Python?
I have started two different Appium servers through Selenium Grid, but I am not able to write the code that starts a separate driver for each device and runs the script on both in parallel using Python.
It is better to prepare a separate file for the values you are going to use, and a separate code file where you define your test cases and keywords.
Here is an example of the values file:
Device:
  SamsungA7:
    Device_name: 111354d3                                          # device id
    server: http://localhost:4723/wd/hub                           # Appium server URL
    appPackage: com.android.contacts                               # app package name of your application
    appActivity: com.android.contacts.activities.PeopleActivity    # app activity of your application
    platform: 6.0                                                   # platform version of your device
    automation: Appium                                              # used as automationName instead of UiAutomator for devices on Android 4.4
Here is an example of the code file:
*** Settings ***
Test Setup        Sum Of Two Numbers A+B
Test Teardown     Set The Default Values
Suite Setup       Set Value

*** Variables ***
${VALUE_A}    1    # default value of A
${VALUE_B}    1    # default value of B

*** Test Cases ***
Sum Of First Two Numbers Should Be 6
    Enter First Value     5
    Enter Second Value    1    # 5+1

Sum Of Second Two Numbers Should Be 11
    Enter Sum Of First Value    6
    Enter Second Value          5    # 6+5

*** Keywords ***
Set The Default Values
    Set Global Variable    ${VALUE_A}    1
    Set Global Variable    ${VALUE_B}    1
Note: The code file must be saved in the .robot format; the values file can be saved as YAML, plain text, or JSON.
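To address the Python part of the original question, below is a minimal sketch (an assumption-laden outline, not a tested solution) that reads a YAML values file like the one above and starts one Appium session per device in its own thread. It assumes PyYAML is installed, an Appium-Python-Client release whose webdriver.Remote still accepts a desired-capabilities dict, one Appium server URL per device entry, and a hypothetical devices.yaml file name; adjust everything to your setup.

import threading
import yaml                        # assumes PyYAML is installed
from appium import webdriver       # assumes an Appium-Python-Client release that accepts a capabilities dict

def run_on_device(name, conf):
    # Build the desired capabilities from one entry of the values file above.
    caps = {
        'platformName': 'Android',
        'deviceName': conf['Device_name'],
        'udid': conf['Device_name'],
        'platformVersion': str(conf['platform']),
        'appPackage': conf['appPackage'],
        'appActivity': conf['appActivity'],
    }
    driver = webdriver.Remote(conf['server'], caps)
    try:
        # ... the same test steps run here for every device ...
        print(name, 'started', driver.current_activity)
    finally:
        driver.quit()

with open('devices.yaml') as f:    # hypothetical file name for the values file shown earlier
    devices = yaml.safe_load(f)['Device']

threads = [threading.Thread(target=run_on_device, args=item) for item in devices.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()

Each thread drives one device through its own Appium server, which is what lets the two sessions run in parallel.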
I have a huge port-in request at Twilio and need to assign 200+ numbers to our SIP trunk. The following (redacted) simple Python script works for one number. Is there a simple way to pipe in a .txt file with the 200 +1XXXXXXXXXX numbers and update the SIP trunk for each?
from twilio.rest import Client

account_sid = 'ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
auth_token = 'your_auth_token'
client = Client(account_sid, auth_token)

incoming_phone_number = client \
    .incoming_phone_numbers('PNXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX') \
    .update(trunk_sid='TKXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')

print(incoming_phone_number.friendly_name)
If you assign the numbers under the Elastic SIP Trunk in the console, you can assign many of them at once.
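If you prefer to script the bulk assignment instead, here is a minimal sketch under a few assumptions: a numbers.txt file (hypothetical name) holds one +1XXXXXXXXXX number per line, each number already exists on the account, and the Twilio Python helper library is used to look each number up and move it onto the trunk. Treat it as an outline rather than a tested bulk-update script.

from twilio.rest import Client

account_sid = 'ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
auth_token = 'your_auth_token'
trunk_sid = 'TKXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
client = Client(account_sid, auth_token)

with open('numbers.txt') as f:                          # hypothetical input file
    numbers = [line.strip() for line in f if line.strip()]

for number in numbers:
    # Find the IncomingPhoneNumber resource for this E.164 number.
    matches = client.incoming_phone_numbers.list(phone_number=number)
    if not matches:
        print('No incoming phone number found for', number)
        continue
    updated = client.incoming_phone_numbers(matches[0].sid).update(trunk_sid=trunk_sid)
    print('Assigned', updated.phone_number, 'to the trunk')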
I have a file that contains the logs below. I want to find where my TD_Map string is; I can do that with the str.find() method, but is there a way to get the complete token back as the return value? For this example I should get TD_Map2.
I can only search for the TD_Map prefix, because the remaining part can be any integer constant.
Running
OK, start the cluster assignment.
: 6035
Warning: The Default Cluster size results in a cluster with multiple Amps on the same Clique.
The cluster assignment is complete.
Reading
da 6-7
Running
OK.
The AMPs 6 to 7 are deleted.>
Reading
ec
Running
OK.
The session terminated. TD_Map2 is saved.>
There are so many ways to do this, but it seems like a nice use case for regex:
import re
s = """Running
OK, start the cluster assignment.
: 6035
Warning: The Default Cluster size results in a cluster with multiple Amps on the same Clique.
The cluster assignment is complete.
Reading
da 6-7
Running
OK.
The AMPs 6 to 7 are deleted.>
Reading
ec
Running
OK.
The session terminated. TD_Map2 is saved.>
"""
match = re.search(r'TD_Map(\d+)', s)
print(match.group(0))  # prints the whole token, "TD_Map2"
print(match.group(1))  # prints only the number, "2"
Output:
TD_Map2
2
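If the log can contain more than one TD_Map token, a small follow-up to the same snippet using re.findall with the same pattern returns all of them:

# Collect every TD_Map token in the log instead of only the first match.
tokens = re.findall(r'TD_Map\d+', s)
print(tokens)    # ['TD_Map2'] for the sample log above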
The logs say that 2 test workers were used; is there a way to configure the maximum to be 1?
Run Settings
...
NumberOfTestWorkers: 2
Using a manual script like the one below works, but it gets messy when the solution contains many assemblies.
test_script:
- nunit3-console.exe Gu.Persist.Core.Tests\bin\Release\Gu.Persist.Core.Tests.dll --result=myresults.xml;format=AppVeyor --workers=1
- ...
AppVeyor generates the nunit3-console command line without any --workers switch, so I believe the number of workers is decided by the NUnit console itself. As I understand it, if you remove the Parallelizable attribute from your tests, only one worker will be used.
I need guidance on implementing a WHILE loop with Kettle/PDI. The scenario is:
(1) I have some data (maybe thousands, or thousands of thousands, of rows) in a table to be validated against a remote server.
(2) Read them and look them up on the remote server; I use the Modified Java Script step for this, because the remote-server lookup validation is defined in an external Java JAR file (I can use the "Change number of copies to start..." option on the Modified Java Script step and set it to 5 or 10).
(3) Update the result in the database table. Around 50 to 60% of the lookups fail with connection errors in each session.
(4) Repeat steps 1 to 3 until every row is updated to success.
(5) Stop looping after the Nth cycle to avoid very long or infinite looping; N may be 5 or 10.
How can I design such a WHILE loop in Pentaho Kettle?
Have you seen this link? It gives a fairly detailed explanation of how to implement a while loop.
You need a parent job with a sub-transformation that checks the condition and returns a variable to the job indicating whether to abort or continue.
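The loop logic itself can be pictured with a small Python stand-in. This only illustrates the control flow of the parent job; the validate_pending function and the 50% random failure rate are stand-ins for the real remote lookup and its connection failures, and it is not Kettle code:

import random

MAX_CYCLES = 5                                   # the "N" from step 5

def validate_pending(pending):
    # Stand-in for the sub-transformation that calls the remote server:
    # roughly half of the lookups "fail" and stay pending for the next cycle.
    return [row for row in pending if random.random() < 0.5]

pending = list(range(1000))                      # stand-in for rows not yet validated successfully
for cycle in range(1, MAX_CYCLES + 1):
    pending = validate_pending(pending)          # failed rows remain pending
    if not pending:
        break                                    # everything validated, stop early
print('Stopped after cycle', cycle, 'with', len(pending), 'rows still pending')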
I have a program that does some math in an SQL query. There are hundreds of thousands of rows (device measurements) in an SQLite table, and using this query the application breaks these measurements into groups of, for example, 10,000 records and calculates the average for each group. Then it returns the average value for each of these groups.
The query looks like this:
SELECT strftime('%s',Min(Stamp)) AS DateTimeStamp,
AVG(P) AS MeasuredValue,
((100 * (strftime('%s', [Stamp]) - 1334580095)) /
(1336504574 - 1334580095)) AS SubIntervalNumber
FROM LogValues
WHERE ((DeviceID=1) AND (Stamp >= datetime(1334580095, 'unixepoch')) AND
(Stamp <= datetime(1336504574, 'unixepoch')))
GROUP BY ((100 * (strftime('%s', [Stamp]) - 1334580095)) /
(1336504574 - 1334580095)) ORDER BY MIN(Stamp)
The numbers in this query are substituted by my application at run time.
I don't know if I can optimize this query any further (if anyone could help me do so, I'd really appreciate it).
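For reference, here is a minimal sketch of how that substitution could be done with bound parameters instead of pasting the numbers into the SQL text. It uses Python's built-in sqlite3 module, and the database file name and parameter names are assumptions:

import sqlite3

# Hypothetical boundary values; the application supplies its own at run time.
device_id = 1
start_ts = 1334580095
end_ts = 1336504574

query = """
    SELECT strftime('%s', MIN(Stamp))                                  AS DateTimeStamp,
           AVG(P)                                                      AS MeasuredValue,
           (100 * (strftime('%s', Stamp) - :start)) / (:end - :start)  AS SubIntervalNumber
    FROM LogValues
    WHERE DeviceID = :device
      AND Stamp >= datetime(:start, 'unixepoch')
      AND Stamp <= datetime(:end, 'unixepoch')
    GROUP BY (100 * (strftime('%s', Stamp) - :start)) / (:end - :start)
    ORDER BY MIN(Stamp)
"""

with sqlite3.connect('measurements.db') as conn:     # hypothetical database file
    rows = conn.execute(query,
                        {'device': device_id, 'start': start_ts, 'end': end_ts}).fetchall()
print(len(rows), 'sub-interval averages returned')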
This SQL query can be executed with the SQLite command-line shell (sqlite3.exe). On my Intel Core i5 machine it takes 4 seconds to complete (there are 100,000 records in the database being processed).
Now, if I write a C program using the sqlite3.h C interface, I wait 14 seconds for exactly the same query to complete. The C program "waits" during these 14 seconds on the first sqlite3_step() call (any following sqlite3_step() calls execute immediately).
From the SQLite download page I downloaded the command-line shell's source code and built it with Visual Studio 2008. I ran it and executed the query: again 14 seconds.
So why does the prebuilt command-line tool downloaded from the SQLite website take only 4 seconds, while the same tool built by me takes four times as long?
I am running 64-bit Windows. The prebuilt tool is an x86 process. It also does not seem to be multicore-optimized: in Task Manager during query execution I can see only one core busy, for both my build and the prebuilt SQLite shell.
Is there any way I could make my C program execute this query as fast as the prebuilt command-line tool does?
Thanks!