How do I save the web service response to the same Excel sheet I extracted the data from?

For example:
Using the given sample HP Flights SampleAppData.xls and the CreateFlightOrder operation, we can link the data to the test functions and get an OrderNumber and Price response from the web service. And in the Input tab of SampleAppData.xls, we can see that there is an empty OrderNumber column.
So here is my question: is there any way I can take the OrderNumber response and fill the empty column in SampleAppData.xls?
The reason I want this: suppose I have many test cases that will take days to run, and the result of today's test is needed as input for the next day's test.
I know the responses are saved in the run results, but it defeats the point of automation if I have to check the response for each and every test case manually.

Yes, of course you can. There are a number of ways to do this. The simplest is as follows:
'Syntax: DataTable.Value("columnName", "sheetName") = "value"
DataTable.Value("Result", "Action1") = "Pass"
Once you have recorded the results in the data sheet, you can export them using
DataTable.ExportSheet("C:\SavePath\Results.xls")

You can write the response back programmatically if you have already imported the sheet manually.
You can use the GetDataSource class of the UFT API. It works like this: say you imported the Excel file FlightSampleData.xls and named it FlightSampleData; you can then access its Input sheet like this:
GetDataSource("FlightSampleData!Input").Set(row, columnName, yourValue);
GetDataSource("FlightSampleData!Input").Get(row, columnName);
For exporting, you can use the ExportToExcelFile method of the GetDataSource class after your test run. Please let me know if you have any further questions about this.
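
If you would rather patch the workbook outside UFT (for example in a post-processing step between test days), here is a minimal sketch in Python with openpyxl. It assumes you work on an .xlsx copy of the data file, since openpyxl does not handle the legacy .xls format; the row number and order number used are purely illustrative.

# Minimal sketch: write a response value back into the data workbook.
# Assumes an .xlsx copy of SampleAppData; row and value are illustrative.
from openpyxl import load_workbook

wb = load_workbook("SampleAppData.xlsx")
ws = wb["Input"]

# Find the "OrderNumber" column in the header row (row 1).
header = [cell.value for cell in ws[1]]
col = header.index("OrderNumber") + 1  # openpyxl columns are 1-based

# Write the OrderNumber returned by the web service into the first data row.
ws.cell(row=2, column=col, value="12345")
wb.save("SampleAppData.xlsx")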

Related

Headers are treated as data in a spreadsheet query through JS

I am trying to read data from a spreadsheet using a Google Visualization query. I am sending an axios request to my spreadsheet, and after getting the response, the output shows the headers as the first row of data instead of treating them as headers. I am sending the axios request to this URL:
let url = `https://docs.google.com/spreadsheets/d/${id}/gviz/tq?tqx=out:csv&tq=${encodedQuery}&gid=${gid}`
What do I need to change to get the headers treated as headers (treating everything as data would also work)? Is there something wrong with the spreadsheet? I have deleted my sheet and recreated it, but the problem persists.
After seven hours of head-scratching, I have a solution for the question I posted. The reason behind the error was simply bad (empty) cells in the sheet. If the first cell under a header is empty, the query fails to detect the common data type of that column, so it takes the headers as data. To avoid this, always populate the first row of data in the sheet, so that the query can detect the type of data and treat headers as headers and not data.
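
To check what the endpoint actually returns without the JS layer, here is a minimal sketch of the same gviz query in Python with requests; the spreadsheet id, gid, and query are placeholders, and the sheet must be readable by link for an unauthenticated request to work.

# Minimal sketch: fetch the gviz CSV endpoint and inspect the first row.
import csv
import io
from urllib.parse import quote

import requests

sheet_id = "YOUR_SPREADSHEET_ID"   # placeholder
gid = "0"                          # placeholder
query = quote("select A, B")       # placeholder query

url = (
    f"https://docs.google.com/spreadsheets/d/{sheet_id}/gviz/tq"
    f"?tqx=out:csv&tq={query}&gid={gid}"
)

rows = list(csv.reader(io.StringIO(requests.get(url).text)))
print(rows[0])  # with a populated first data row, this should be the headers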

Dynamic query (current date) via web services in Power BI

In my project we consume the company's data via a REST web service. Today we do not build the query dynamically; the start date and end date parameters are passed as static strings.
My goal is for the end date to update dynamically. I've already created a query that takes the current date but I can't put it in the parameter without generating an error in the query.
I get an error message when I put the column value in the parameter.
I'm pretty sure I'm getting the syntax wrong. Any help would be much appreciated. I would like to point out that the date format required for the API call to work is DD/MM/YYYY.
Can you try using
PutYourOtherTableNameHere[Hoje_Coluna]{0}
instead of
[Hoje_Coluna]
?
In Power Query M, TableName[ColumnName]{0} drills down to the first value in that column as a scalar, which is what the parameter needs. To see if it will work, put this step right before your query, then click on the step and see what it returns:
x = PutYourOtherTableNameHere[Hoje_Coluna]{0},
Since your API needs the DD/MM/YYYY format, you may also have to convert the value to text, with something like Date.ToText(PutYourOtherTableNameHere[Hoje_Coluna]{0}, "dd/MM/yyyy").

Kibana: can I store "Time" as a variable and run a consecutive search?

I want to automate a few searches in one; here are the steps:
Search in Kibana for this ID: "b2c729b5-6440-4829-8562-abd81991e2a0", which will return a bunch of logs. Of these logs I need to take the first and the last timestamp:
I would now like to store these two timestamps, FROM: September 3rd 2019, 21:28:22.155 and TO: September 3rd 2019, 21:28:23.524, in 2 variables
Run a second search in Kibana for the word "fail" between these two time variables
How to automate the whole process without need of copy/paste and running a second query?
EDIT:
SHORT STORY LONG: I work in a company that produces software for autonomous vehicles.
SCENARIO: A booking is rejected and we need to understand why.
WHERE THE PROBLEM IS: I need to monitor just a few seconds of logs on 3 different machines. Each log is completely separate; there is no relation between the logs, so I cannot write a single query in Discover. I need to run 3 separate queries.
EXAMPLE:
A booking was rejected, so I open Chrome and search on "elk-prod.myhost.com" for the BookingID: "b2c729b5-6440-4829-8562-abd81991e2a0", and I get a dozen logs returned within a range of 2 seconds (FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524).
Now I need to know what was happening on the car, so I open a new Chrome tab and search on "elk-prod.myhost.com" for the CarID: "Tesla-45-OU" in the time range FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524.
Now I need to know why the server which calculates the matching rejected the booking, so I open a new Chrome tab and search for the word CalculationMatrix, again in the time range FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524.
CONCLUSION: I want to stop opening Chrome tabs by hand and automate the whole thing. I have no idea around what time the booking was made, so I first need to search for the BookingID "b2c729b5-6440-4829-8562-abd81991e2a0", then store the timestamps of the first and last log, and run a second and third query based on those timestamps.
There is no relation between the 3 logs I search for, so there is no way to filter from Discover; I need to automate 3 different queries.
Here is how I would do it. First of all, from what I understand, you have three different indexes:
one for "bookings"
one for "cars"
one for "matchings"
First, in Discover, I would create three Saved Searches, one per index pattern. Then in Visualize, I would create a Vertical bar chart on the bookings saved search (Bucket X-Axis by date_histogram on the timestamp field, leave the rest as is). You'll get a nice histogram of all your booking events bucketed by time.
Finally, I would create a dashboard and add the vertical bar chart + those three saved searches inside it.
When done, the way I would search according to the process you've described above is as follows:
Search for the booking ID b2c729b5-6440-4829-8562-abd81991e2a0 in the top filter bar. In the bar chart histogram (bookings), you will see all documents related to the selected booking. On that chart, you can select the exact period from when the very first booking document happened to the very last. This will adapt the main time picker at the top, and the start/end time will be "remembered" by Kibana.
Remove the booking ID from the top filter (since we now know the time range and Kibana stores it). Search for Tesla-45-OU in the top filter bar. The bar histogram + the bookings saved search + the matchings saved search will be empty, but you'll have data inside the second list, the one for cars. Find whatever you need to find in there and go to the next step.
Remove the car ID from the top filter and search for CalculationMatrix. Now the third saved search is going to show you whatever documents you need to see within that time range.
I'm lacking realistic data to try this out, but I definitely think this is possible as I've laid out above, probably with some adaptations.
Kibana does work like this (any order is ok):
Select time filter: https://www.elastic.co/guide/en/kibana/current/set-time-filter.html
Add additional search criteria, for example field s is b2c729b5-6440-4829-8562-abd81991e2a0.
Add additional search criteria, for example field x is Fail.
Additionally, you can view the surrounding documents: https://www.elastic.co/guide/en/kibana/current/document-context.html#document-context
This is how Kibana works.
You can prepare some filters beforehand, save them, and then reuse them if you want to somehow automate the discovery process.
You can do that in the Discover tab in Kibana using the New/Save/Open options.
Edit:
I do not think you can achieve what you need in Kibana. As I mentioned earlier, one option is to change the data that is coming into Elasticsearch so you can search for it via Discover in Kibana. Another option could be building, for example, a Java application that uses Elasticsearch; then you can write an algorithm that returns the data you want. But I think it's a big overhead, and I recommend checking the data first.
Edit: To clarify, you can create an external Java (let's say Spring Boot) application that uses Elasticsearch; all the data that you need is inside it.
But with this option you will not use Kibana at all.
You can export the result to CSV or whatever you want in the code.
The Spring Boot application can ask Elasticsearch for whatever it needs, and it would then be easy to store these time variables inside the Java code.
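
To make that external-application idea concrete, here is a minimal sketch in Python with the official Elasticsearch client (v8-style API) instead of Java/Spring Boot; the index names, field names, and host URL are assumptions for illustration.

# Minimal sketch: find the booking's time window, then reuse it.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://elk-prod.myhost.com:9200")  # assumed host
booking_id = "b2c729b5-6440-4829-8562-abd81991e2a0"

# 1. All logs for the booking ID, oldest first.
hits = es.search(
    index="bookings",
    query={"match": {"BookingID": booking_id}},
    sort=[{"@timestamp": {"order": "asc"}}],
    size=1000,
)["hits"]["hits"]

# 2. Store the first and last timestamp in two variables.
start = hits[0]["_source"]["@timestamp"]
end = hits[-1]["_source"]["@timestamp"]
window = {"range": {"@timestamp": {"gte": start, "lte": end}}}

# 3. The second and third searches reuse the same time window.
car_logs = es.search(
    index="cars",
    query={"bool": {"must": [{"match": {"CarID": "Tesla-45-OU"}}],
                    "filter": [window]}},
)
matrix_logs = es.search(
    index="matchings",
    query={"bool": {"must": [{"match": {"message": "CalculationMatrix"}}],
                    "filter": [window]}},
)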
EDIT, after the OP edited the question to change it dramatically:
@FrancescoMantovani Well, the edited version is very different from what you first posted here: "How to automate the whole process without need of copy/paste and running a second query?", searching for the word "fail" in a single shot. In the accepted answer you are still using three filters, one at a time, so it is not one search but three.
What's more, if you used one index and sent data from multiple hosts via Filebeat, you wouldn't even have to create this dashboard to do that. You could select the exact period from when the very first document happened to the very last, then remove that filter and add another filter that you need; it's as simple as that. Before, you were writing about one query,
"How to automate the whole process without need of copy/paste and running a second query?"
not three. And you don't need to open a new Chrome tab each time you want to change a filter; just organize the data, for example by using Filebeat as mentioned before.
There is no relation between the 3 logs
From what you wrote, the relation does exist, and it is time.
If the data is in, for example, three different indices (because the documents don't share much data), you can do it like this:
You can change between them easily in Discover:
Go to Discover, select index 1, search, and select the time range that you need. When you change the index, the time range stays the one you selected; you only need to change the filter, and you will get what you need.

Is there a way to pass a data source connection string as a parameter to Power BI Embedded?

I have a pbix file that takes an Azure Storage account as a parameter and reads data from there accordingly. The next step is to be able to embed this Power BI dashboard in a webpage and let the end user specify the storage account. I see a lot of questions and answers about passing in filter query parameters; this is different, as we're trying to read from a completely different data source rather than filtering a static one.
Another way to ask this question: is there a way to embed Power BI template files? If not, is there a feature request somewhere we can upvote?
The short answer is no.
There is a reason to use filters in this case instead of parameters. Parameters are part of the report itself. Every user that looks at your report gets the same parameter values as the others; if one of them changes a parameter, this affects all other users. A filter, on the other hand, is local to your session. You can filter the report the way you like, and this will not affect other users' experience in any way.
You can't embed templates, because a template is simply a saved state of a report on disk. When you open it, it's not a template anymore; it becomes a report.
You can either combine the data from all of your data sources in a single report, adding one more column to indicate which source each row comes from, and then filter on this new column; or create/modify an ETL process (dataflows can be used for this, for example) to combine these data sources into a single one.

Django Spreadsheet Application

I am building a Django application where you will be able to upload an Excel spreadsheet file and have its contents inserted into the application. I have mostly got the importing sorted out.
What I need is a way to store the values dynamically: I basically need X fields for each row, each with one of three possible types.
These would be:
Boolean
String
Number
How would I go about doing so?
Let's say I have some models that contains this information:
A spreadsheet with a name, and some "header" cells that indicate which fields (and their names) belong to that spreadsheet (dynamically expandable).
A row that can have multiple cells, each with a type of either a boolean, a string or a number.
An example could be like this:
Spreadsheet 100
name (string)
city (string)
religious? (boolean)
phonenumber (number)
and then I need to pair the cells underneath with the appropriate header, like this:
row
name = "Bob Curious"
city = "New York"
religious = "Yes"
phonenumber = "888 888 888"
I hope that explains it well enough; if not, please go ahead and ask about anything you would like explained.
Thanks in advance! :)
This post is pretty old so I'm not sure whether you still need help with your issue, but I've found xlrd to be an excellent tool for scraping spreadsheet data. I would suggest investigating this package further.
Maybe others would like to hear more about a solution for this question. My plugin django-excel helps with Excel data import into and export from one or more Django models. What's more, the plugin provides a single programming interface to handle data in ods (using odfpy or ezodf), xls (using xlrd), xlsx (using openpyxl), and csv formats. I hope it may help you.
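
For the dynamic-fields part of the question, one common pattern is an entity-attribute-value style schema. Here is a minimal sketch of what the models described in the question could look like; the model and field names are illustrative assumptions, not a prescribed design.

# Minimal EAV-style sketch for the dynamic columns described in the question.
# Model and field names are illustrative assumptions.
from django.db import models

class Spreadsheet(models.Model):
    name = models.CharField(max_length=255)

class Header(models.Model):
    BOOLEAN, STRING, NUMBER = "bool", "str", "num"
    TYPE_CHOICES = [(BOOLEAN, "Boolean"), (STRING, "String"), (NUMBER, "Number")]
    spreadsheet = models.ForeignKey(Spreadsheet, related_name="headers",
                                    on_delete=models.CASCADE)
    name = models.CharField(max_length=255)
    value_type = models.CharField(max_length=4, choices=TYPE_CHOICES)

class Row(models.Model):
    spreadsheet = models.ForeignKey(Spreadsheet, related_name="rows",
                                    on_delete=models.CASCADE)

class Cell(models.Model):
    row = models.ForeignKey(Row, related_name="cells", on_delete=models.CASCADE)
    header = models.ForeignKey(Header, on_delete=models.CASCADE)
    # One column per possible type; only the one matching header.value_type is set.
    bool_value = models.BooleanField(null=True)
    str_value = models.CharField(max_length=255, blank=True, default="")
    num_value = models.FloatField(null=True)

A cell under the "religious?" header would then set bool_value, while "phonenumber" would use num_value (or str_value, since phone numbers are often better stored as text).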