Trying to add a new real-time shipping system to an existing, older (4.2.x) version of X-Cart, and I can't figure out how to implement it properly. The plan is to put the lookup into a new shipping/mod_*.php file and, from what I can tell, merge the response I get from the rating API into $intershipper_rates. I just don't know how to integrate it reliably, or whether I need to manually add anything to the database to make it work properly. There doesn't seem to be any reference material or documentation for the older version that I can easily access to figure it out, either. If anybody can give me a hand wrapping my head around this, I'd appreciate it.
In the code below, replace the 'CPC' substring with your new shipper code.
1) Create functions like
func_shipper_CPC
func_get_package_limits_CPC
func_check_limits_CPC
in a new file like
shipping/mod_CPC.php (a rough skeleton is sketched after these steps)
2) Change the array
$mods = array("USPS", "CPC", "ARB", "FEDEX");
in the shipping/myshipper.php
3) Add a row to the shipping options table; the module reads its settings back with a query like this:
$params = func_query_first ("SELECT * FROM $sql_tbl[shipping_options] WHERE carrier='CPC'");
4) Add the possible shipping methods to the xcart_shipping table:
INSERT INTO xcart_shipping VALUES (null,'Canada Post Expedited','','L','CPC','81',20,'Y','CEX',0.00,0.00,1020,'','');
INSERT INTO xcart_shipping VALUES (null,'Canada Post Regular','','L','CPC','82',10,'Y','CRE',0.00,0.00,1010,'','');
INSERT INTO xcart_shipping VALUES (null,'Canada Post Xpresspost USA','','I','CPC','89',90,'Y','',0.00,0.00,2030,'','');
.....
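For step 1, here is a rough sketch of what shipping/mod_CPC.php could contain. X-Cart 4.2.x has little documentation, so the function signature, the rate-array keys, and the API call below are all assumptions; compare them against an existing module such as shipping/mod_USPS.php before relying on them.
<?php
// shipping/mod_CPC.php -- a sketch only. The signature and the rate-array
// keys are assumptions, modelled on what the other mod_*.php files appear
// to return for merging into $intershipper_rates.
function func_shipper_CPC($origin, $destination, $weight)
{
    $rates = array();

    // Hypothetical rating API call: replace the URL and the response
    // parsing with your real carrier's rating service.
    $ch = curl_init("https://rating.example.com/rate?weight=" . urlencode($weight));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);
    curl_close($ch);

    // Assume the API returns JSON like [{"code":"CEX","price":12.34}, ...]
    foreach ((array) json_decode($response, true) as $method) {
        $rates[] = array(
            "methodid" => $method["code"],  // must match the 'code' column in xcart_shipping
            "rate"     => $method["price"],
            "warning"  => "",
        );
    }

    return $rates;
}

// Placeholder limits; fill in the carrier's real maximums.
function func_get_package_limits_CPC()
{
    return array("weight" => 30, "length" => 200, "width" => 200, "height" => 200);
}

function func_check_limits_CPC($package)
{
    $limits = func_get_package_limits_CPC();
    return $package["weight"] <= $limits["weight"];
}
?>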
I want to automate a few searches in one; here are the steps:
Search in Kibana for this ID: "b2c729b5-6440-4829-8562-abd81991e2a0", which will return a bunch of logs. Of these logs I need to take the first and the last timestamp.
I would then like to store these two values, FROM: September 3rd 2019, 21:28:22.155 and TO: September 3rd 2019, 21:28:23.524, in 2 variables.
Run a second search in Kibana for the word "fail" between these two time variables.
How can I automate the whole process without needing to copy/paste and run a second query by hand?
EDIT:
SHORT STORY LONG: I work at a company that produces software for autonomous vehicles.
SCENARIO: A booking is rejected and we need to understand why.
WHERE IS THE PROBLEM: I need to monitor just a few seconds of logs on 3 different machines. Each log is completely separate; there is no relation between the logs, so I cannot write a single query in Discover - I need to run 3 separate queries.
EXAMPLE:
A booking was rejected, so I open Chrome and search on "elk-prod.myhost.com" for the BookingID:"b2c729b5-6440-4829-8562-abd81991e2a0", and I get a dozen logs returned over a range of 2 seconds (FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524).
Now I need to know what was happening on the car, so I open a new Chrome tab and search on "elk-prod.myhost.com" for the CarID: "Tesla-45-OU" on the time range FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524.
Now I need to know why the server that calculates the matching rejected the booking, so I open a new Chrome tab and search for the word CalculationMatrix, again on the time range FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524.
CONCLUSION: I want to stop opening Chrome tabs by hand and automate the whole thing. I have no idea around what time the booking was made, so I first need to search for the BookingID "b2c729b5-6440-4829-8562-abd81991e2a0", then store the timestamps of the first and last log and run a second and third query based on those timestamps.
There is no relation between the 3 logs I search for, so there is no way to filter from Discover; I need to automate 3 different queries.
Here is how I would do it. First of all, from what I understand, you have three different indexes:
one for "bookings"
one for "cars"
one for "matchings"
First, in Discover, I would create three Saved Searches, one per index pattern. Then in Visualize, I would create a Vertical Bar chart on the bookings saved search (bucket the X-axis with a date_histogram on the timestamp field and leave the rest as is). You'll get a nice histogram of all your booking events bucketed by time.
Finally, I would create a dashboard and add the vertical bar chart + those three saved searches inside it.
When done, the way I would search according to the process you've described above is as follows (the filter-bar queries themselves are sketched after these steps):
Search for the booking ID b2c729b5-6440-4829-8562-abd81991e2a0 in the top filter bar. In the bar chart histogram (bookings), you will see all documents related to the selected booking. On that chart, you can select the exact period from when the very first booking document happened to the very last. This will adapt the main time picker at the top, and the start/end times will be "remembered" by Kibana.
Remove the booking ID from the top filter bar (since we now know the time range and Kibana keeps it). Search for Tesla-45-OU in the top filter bar. The bar histogram, the bookings saved search, and the matchings saved search will be empty, but you'll have data inside the second list, the one for cars. Find whatever you need to find in there and go to the next step.
Remove the car ID from the top filter bar and search for CalculationMatrix. Now the third saved search will show you whatever documents you need to see within that time range.
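For reference, the queries typed into the top filter bar at each step would look something like this (KQL syntax; the BookingID, CarID, and message field names are assumptions based on the question):
BookingID : "b2c729b5-6440-4829-8562-abd81991e2a0"
CarID : "Tesla-45-OU"
message : "CalculationMatrix"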
I'm lacking realistic data to try this out, but I definitely think this is possible as I've laid out above, probably with some adaptations.
Kibana does work like this (any order is OK):
Select time filter: https://www.elastic.co/guide/en/kibana/current/set-time-filter.html
Add additional criteria for the search, for example field s is b2c729b5-6440-4829-8562-abd81991e2a0.
Add additional criteria for the search, for example field x is Fail.
Additionally, you can view surrounding documents: https://www.elastic.co/guide/en/kibana/current/document-context.html#document-context
This is how Kibana works.
You can prepare some filters beforehand, save them, and then use them if you want to somewhat automate the discovery process.
You can do that in the Discover tab in Kibana using the New/Save/Open options.
Edit:
I do not think you can achieve what you need in Kibana. As I mentioned earlier, one option is to change the data that is coming into Elasticsearch so you can search for it via Discover in Kibana. Another option could be building, for example, a Java application that uses Elasticsearch - then you can write an algorithm that returns the data that you want. But I think it's a big overhead and I recommend checking the data first.
Edit: To clarify - you can create an external Java application, let's say a Spring Boot application, that uses Elasticsearch - all the data that you need is inside Elasticsearch.
But with this option you will not use Kibana at all.
You can export the result to CSV or whatever you want in the code.
The Spring Boot application can ask Elasticsearch for whatever it needs, and then it is easy to store these time variables inside the Java code.
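A minimal sketch of that approach, using the Elasticsearch High Level REST Client (ES 7.x); the host, index names, and field names (@timestamp, BookingID, message) are assumptions and would need to match your mappings:
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.metrics.Max;
import org.elasticsearch.search.aggregations.metrics.Min;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class LogCorrelator {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("elk-prod.myhost.com", 9200, "http")))) {

            // 1) Find the first/last timestamps of the booking's log entries.
            SearchSourceBuilder bookingSearch = new SearchSourceBuilder()
                    .query(QueryBuilders.matchQuery("BookingID", "b2c729b5-6440-4829-8562-abd81991e2a0"))
                    .aggregation(AggregationBuilders.min("first").field("@timestamp"))
                    .aggregation(AggregationBuilders.max("last").field("@timestamp"))
                    .size(0);
            SearchResponse r = client.search(
                    new SearchRequest("bookings-*").source(bookingSearch), RequestOptions.DEFAULT);
            Min first = r.getAggregations().get("first");
            Max last = r.getAggregations().get("last");
            String from = first.getValueAsString();
            String to = last.getValueAsString();

            // 2) Reuse the stored time range for the follow-up searches.
            SearchSourceBuilder failSearch = new SearchSourceBuilder()
                    .query(QueryBuilders.boolQuery()
                            .must(QueryBuilders.matchQuery("message", "fail"))
                            .filter(QueryBuilders.rangeQuery("@timestamp").gte(from).lte(to)));
            SearchResponse fails = client.search(
                    new SearchRequest("cars-*", "matchings-*").source(failSearch), RequestOptions.DEFAULT);
            fails.getHits().forEach(hit -> System.out.println(hit.getSourceAsString()));
        }
    }
}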
EDIT: After the OP edited the question to change it dramatically:
@FrancescoMantovani Well, the edited version is very different from what you first posted here, which was "How to automate the whole process without need of copy/paste and running a second query?" and searching for the word "fail" in a single shot. In the accepted answer you are still using three filters, one at a time, so it is not one search, but three.
What's more, if you used one index and sent data from multiple hosts via Filebeat, you wouldn't even have to create this dashboard to do that. You could then select the exact period from when the very first document happened to the very last for a given filter, then remove it and add the next filter you need - it's as simple as that. Before, you were writing about one query,
How to automate the whole process without need of copy/paste and
running a second query?
not three. And you don't need to open a new tab in Chrome each time you want to change the filter; just organize the data, for example by using Filebeat as mentioned before.
There is no relation between the 3 logs
From what you wrote, the relation exists and it is time.
If the data is in, for example, three different indices (because the documents don't share much similar data), you can do it like that:
You can change them easily in Discover: go to Discover, select index 1, search, and select the time range that you need. When you change the index, the time range is still the one you selected; you only need to change the filter, and you will get what you need.
Is there a way to query the Ember Data store to find the max id for a given model? I know I could find all, walk through the objects, store ids when they are greater than the previous one, etc., etc.
I'm new to Ember; I'm used to being able to call aggregate methods on a database - you know, max(), min(), sum(), all that stuff. There HAS to be a way to do this in Ember, right? I have searched and searched, and I'm honestly a little mystified that I can't find anything for what has to be a very common use case.
I'm currently using fixtures and want to add a new line to an order. When I create the new record, I need to find the max id of all the other records so I can increment it for the new record.
Ember (and web applications in general) doesn't communicate with a database directly.
It's easy enough to calculate the next id for your fixtures:
let ids = models.mapBy('id');
let nextId = Math.max(...ids) + 1;
Or you could use ember-cli-mirage in your tests which takes care of this for you.
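For the fixture case specifically, a hedged sketch (the model name 'order-line' is hypothetical, and peekAll only sees records already loaded into the store):
// Inside a route/controller/service that has the store injected.
// Ember Data returns ids as strings, so coerce them before comparing.
let ids = this.store.peekAll('order-line').mapBy('id').map(Number);
let nextId = (ids.length ? Math.max(...ids) : 0) + 1;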
I am inserting data into two tables; however, I cannot figure out (after hours of Googling) how to insert data into the second table after retrieving the new ID created by the first insert.
I'm using <CFINSERT>.
Use <CFQUERY result="result_name"> and the new ID will be available as result_name.generatedkey. <cfinsert> and <cfupdate>, while easy and fast for simple jobs, are pretty limited.
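A minimal sketch of the two inserts chained together (the datasource, table, and column names are made up for illustration):
<!--- First insert; the result attribute exposes the generated key. --->
<cfquery datasource="myDSN" result="insertResult">
    INSERT INTO orders (customerName)
    VALUES (<cfqueryparam value="#form.customerName#" cfsqltype="cf_sql_varchar">)
</cfquery>

<!--- Second insert reuses the new ID from the first. --->
<cfquery datasource="myDSN">
    INSERT INTO orderItems (orderID, sku)
    VALUES (<cfqueryparam value="#insertResult.generatedKey#" cfsqltype="cf_sql_integer">,
            <cfqueryparam value="#form.sku#" cfsqltype="cf_sql_varchar">)
</cfquery>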
I have never used cfinsert myself, but this blog post from Ben Forta says you may not be able to use cfinsert if you need a generated key: http://www.forta.com/blog/index.cfm/2006/10/3/Use-CFINSERT-And-CFUPDATE
Yes, I realize that blog post is old, but it doesn't appear much has changed.
Why not use a traditional INSERT statement wrapped in a <cfquery> tag?
This is what I am trying to do:
_tableView.data[0].rows[selectedPosY].children[selectedPosX].imageId = tempImageId;
_tableView.data[0].rows[selectedPosY].children[selectedPosX].image = tempImageUrl;
Titanium.API.info("imageIdSelected:" +
_tableView.data[0].rows[selectedPosY].children[selectedPosX].imageId + "imageSelected:" +
_tableView.data[0].rows[selectedPosY].children[selectedPosX].image);
The update is done on the data, but it doesn't reflect in the UI table. What is missing?
I even tried doing the below, as per How can I refresh my TableView in titanium? and How to resolve Titanium TableView display problem?, but it is not refreshing the UI table:
_tableView.setData(_tableView.data); win.add(_tableView);
It turns out the only way to update/reload a Titanium.UI.TableView is to take a copy of the updated data (as per one's logic) and reset it in the TableView using setData. For my example, since _tableView.data was getting updated (which could be seen through the logging statements), I could copy it using a JavaScript array copy function like:
var data2 =_tableView.data.slice(0);
_tableView.setData(data2);
With the above knowledge I had to restructure my code, so I am not using this exact code, but similar logic. Overall, though, this way of updating the table doesn't seem very appealing; if there is a better way of handling this, please post it, it would help a lot.
Why are you doing _tableView.data[0].rows and then just _tableView.data when using setData?
Your _tableView.data should be an array of rows/sections. The Ti.UI.TableView object has the data property, but the data should be structured as follows:
_tableView.data = [
{title:"Row 1"},
{title:"Row 2"}
];
What does the .rows accomplish for you when you are doing it that way, and how are you using the .rows? I'm not positive, but I think the _tableView.data is either no different, or invalid when you try to setData.
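For illustration, a self-contained sketch of that structure along with a setData refresh (classic Titanium API; the row titles are placeholders):
// Build a table whose data is a plain array of row objects.
var win = Ti.UI.createWindow({ backgroundColor: 'white' });
var tableView = Ti.UI.createTableView({
    data: [
        { title: 'Row 1' },
        { title: 'Row 2' }
    ]
});
win.add(tableView);
win.open();

// Later, after changing your own copy of the rows, reset the whole array:
var rows = [{ title: 'Row 1 (updated)' }, { title: 'Row 2' }];
tableView.setData(rows);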
I have a simple database and want to update an int value. I initially run a query and get back a ResultSet (sql::ResultSet). For each of the entries in the result set, I want to modify a value in one particular column of a table, then write it back out to the database / update that entry in that row.
It is not clear to me from the documentation how to do that. I keep seeing INSERT statements along with updates, but I don't think that is what I want - I want to keep most of the row of data intact and just update one column.
Can someone point me to some sample code or other clear reference/resource?
EDIT:
Alternatively, is there a way to tell the database to increment a particular int field (row/col) by some value?
EDIT:
So what is the typical way people use MySQL from C++? The C API or MySQL++? I guess I chose the wrong API...
From a quick scan of the docs it appears Connector/C++ is a partial implementation of the Java JDBC API for C++. I didn't find any reference to updateable result sets so this might not be possible. In Java JDBC the ResultSet interface includes support for updating the current row if the statement was created with ResultSet.CONCUR_UPDATABLE concurrency.
You should investigate whether Connector/C++ supports updatable result sets.
EDIT: To update a row, you will need to use a PreparedStatement containing an SQL UPDATE, and then the statement's executeUpdate() method. With this approach you must identify the record to be updated with a WHERE clause. For example:
update users set userName='John Doe' where userID=?
Then you would create the PreparedStatement, set the parameter value, and call executeUpdate().
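A sketch of those steps with Connector/C++ (the connection details and table/column names are placeholders; this variant also covers the "increment an int" case from the second EDIT):
#include <memory>
#include <cppconn/driver.h>
#include <cppconn/connection.h>
#include <cppconn/prepared_statement.h>

int main()
{
    // Connection details are placeholders.
    sql::Driver *driver = get_driver_instance();
    std::unique_ptr<sql::Connection> con(
        driver->connect("tcp://127.0.0.1:3306", "user", "password"));
    con->setSchema("mydb");

    // Increment one int column in place instead of rewriting the whole row.
    std::unique_ptr<sql::PreparedStatement> stmt(con->prepareStatement(
        "UPDATE users SET loginCount = loginCount + ? WHERE userID = ?"));
    stmt->setInt(1, 1);   // amount to increment by
    stmt->setInt(2, 42);  // hypothetical userID
    int rowsAffected = stmt->executeUpdate();

    return rowsAffected == 1 ? 0 : 1;
}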