Logic for self-calling button - if-statement

I am a beginner and I am sorry in advance if I fail to make sense with my question.
I have a PLC that sends a signal that is then translated and stored in a database.
The table stores the following:
- InvokeID (auto increment/unique)
- Date (yyyy.MM.dd hh:mm:ss)
- Maintenance boolean
- Logistics boolean
Using the above information I need to come up with an if statement that calls the maintenance/logistics button each time a new signal comes in (a record is added to the table), but it should only do so once for each signal/record in the table.
The buttons are on a web page that refreshes every few seconds, so I must ensure that the if statement fires only once for each record/signal.

I have solved the problem by simply storing the previous InvokeID and checking whether a newer InvokeID exists. Sorry for wasting the time of those who have looked at the problem.
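For anyone landing here with the same problem, that fix can be sketched roughly like this (hypothetical names; it assumes the page keeps the last processed InvokeID between refreshes):

```python
def check_for_new_signal(last_seen_id, rows):
    """Return (new_last_seen_id, rows_to_fire) for the signal table.

    rows: list of dicts with keys 'InvokeID', 'Maintenance', 'Logistics'.
    Only records with an InvokeID greater than last_seen_id trigger the
    buttons, so each record fires exactly once even across page refreshes.
    """
    to_fire = [r for r in rows if r['InvokeID'] > last_seen_id]
    new_last = max((r['InvokeID'] for r in rows), default=last_seen_id)
    return new_last, to_fire

# Example: on one page refresh
rows = [
    {'InvokeID': 1, 'Maintenance': True,  'Logistics': False},
    {'InvokeID': 2, 'Maintenance': False, 'Logistics': True},
]
last_seen, fire = check_for_new_signal(0, rows)
```

On the next refresh, calling `check_for_new_signal(last_seen, rows)` returns an empty list, so each signal is handled exactly once.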

Related

How can I loop only the page records from the selected one to the latest?

I'm trying to loop all records displayed in a page, from the selected one to the end of the rows:
For example here, as I'm selecting only the 5th row it will loop through 5th and 6th row (as there are no more rows below)
What I've been trying is this:
ProdOrderLine := Rec;
REPEAT
  // process ProdOrderLine here
UNTIL ProdOrderLine.NEXT = 0;
But it will loop through all records in the table which are not even displayed in the page...
How can I loop only the page records from the selected one to the latest?
Try Copy instead of assignment, i.e. ProdOrderLine.Copy(Rec);. Assignment only copies the field values from one record variable to another; it does not copy filters or keys (sort order).
Alas, I have to mention that this is an uncommon way to handle records in BC. The general best-practice approach would be to ask the user to select all the records he or she needs with shift+click, ctrl+click, or by dragging the mouse. In that case you would use SetSelectionFilter to instantly grab the selected records.
This is how it works across the system, and this is how users should be taught to work. It is a bad idea to add a way of interacting with records that only works in one page in the whole system, even if users are asking for it bursting into tears. They probably just had this type of interaction in some other system they worked with before. I know this is a tough fight, but it is worth it, for the sake of the system's stability (less code = fewer bugs) and predictability (a given way of interacting works the same across all pages).

Optimistic concurrency - Isn't there a problem?

I've read the article.
The article describes the following solution for situations where many users can write to the same DB.
As a user you need to:
1. Retrieve the row and its last-modified dateTime.
2. Make the calculations you want, but don't write anything to the DB yet.
3. After the calculations, just before you want to write the result to the DB, retrieve the last-modified dateTime of the same row again.
4. Compare the dateTime from #1 to the dateTime from #3. If they are equal, everything is OK: commit, and write the current time as the row's last-modified dateTime. Otherwise, another user was here: roll back.
This process seems logical, BUT I see the following hole in it:
In #3 the user retrieves the last-modified dateTime of the row, but what if, between the reading of this dateTime (in #3) and the actual write (in #4), another user enters, writes their data and leaves? The first user can never know about it, and will overwrite the second user's data.
Isn't it possible?
The algorithm you describe does indeed have an opportunity to miss concurrent updates between steps #3 and #4.
The part about testing for optimistic concurrency violations says:
When an update is attempted, the timestamp value in the database is
compared to the original timestamp value contained in the modified
row. If they match, the update is performed and the timestamp column
is updated with the current time to reflect the update. If they do not
match, an optimistic concurrency violation has occurred.
Although not mentioned explicitly, the idea is for the compare and update step to happen atomically on the server. This can be done with an UPDATE statement containing a WHERE clause involving the timestamp and its original value. Similar to the example mentioned in the article where all the original column values in a row still match those found in the database.
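A minimal sketch of that atomic compare-and-update, using SQLite and an integer version column in place of a timestamp (the table and column names are made up; the principle is identical):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE account (id INTEGER PRIMARY KEY,'
             ' balance INTEGER, version INTEGER)')
conn.execute('INSERT INTO account VALUES (1, 100, 1)')

def optimistic_update(conn, row_id, new_balance, expected_version):
    # The WHERE clause makes the compare and the write a single atomic
    # statement on the server: the UPDATE only matches if nobody else
    # bumped the version since we first read the row.
    cur = conn.execute(
        'UPDATE account SET balance = ?, version = version + 1 '
        'WHERE id = ? AND version = ?',
        (new_balance, row_id, expected_version))
    return cur.rowcount == 1  # False => concurrency violation, roll back/retry

ok = optimistic_update(conn, 1, 150, expected_version=1)     # succeeds
stale = optimistic_update(conn, 1, 200, expected_version=1)  # version is now 2
```

Here `ok` is True and `stale` is False: the second writer detects the violation instead of silently overwriting the first writer's data, which closes the hole described in the question.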

Performance optimization on Django update or create

In a Django project, I'm refreshing tens of thousands of lines of data from an external API on a daily basis. The problem is that since I don't know if the data is new or just an update, I can't do a bulk_create operation.
Note: some, or perhaps many, of the rows do not actually change on a daily basis, but I don't know which, or how many, ahead of time.
So for now I do:
for row in csv_data:
    try:
        MyModel.objects.update_or_create(id=row['id'], defaults={'field1': row['value1'], ...})
    except Exception:
        print('error!')
And it takes... forever! One or two lines a second at best, sometimes several seconds per line. Each model I'm refreshing has one or more other models connected to it through a foreign key, so I can't just delete them all and reinsert every day. I can't wrap my head around this one -- how can I significantly cut down the number of database operations so the refresh doesn't take hours and hours?
Thanks for any help.
The problem is that you are doing a database action for each data row you grabbed from the API. You can avoid that by working out which of the rows are new (and bulk-inserting all new rows), which of the rows actually need an update, and which didn't change.
To elaborate:
grab all the relevant rows from the database (meaning all the rows that can possibly be updated)
old_data = MyModel.objects.all()  # if possible, do MyModel.objects.filter(...) instead
Grab all the api data you need to insert or update
api_data = [...]
for each row of data understand if its new and put it in array, or determine if the row needs to update the DB
for row in api_data:
    if is_new_row(row, old_data):
        new_rows_array.append(row)
    elif is_data_modified(row, old_data):
        ...  # do the update
MyModel.objects.bulk_create(new_rows_array)
is_new_row - checks whether the row is new; if so, it is added to the array that will be bulk-created.
is_data_modified - looks the row up in the old data and checks whether that row's data has changed; only changed rows are updated.
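The classification step described above can be sketched in plain Python (field names are made up; in practice old_data would come from a single queryset, converted to dicts with values()):

```python
def classify(api_data, old_data):
    """Split API rows into new rows and changed rows; untouched rows are skipped.

    api_data: list of dicts like {'id': ..., 'field1': ...}
    old_data: same shape, representing what is already in the DB.
    """
    # Index the DB rows once, so each API row costs an O(1) lookup
    # instead of a per-row database query.
    old_by_id = {r['id']: r for r in old_data}
    new_rows, changed_rows = [], []
    for row in api_data:
        old = old_by_id.get(row['id'])
        if old is None:
            new_rows.append(row)       # candidates for bulk_create
        elif old != row:
            changed_rows.append(row)   # update only these
    return new_rows, changed_rows

new_rows, changed_rows = classify(
    api_data=[{'id': 1, 'field1': 'a'},
              {'id': 2, 'field1': 'B'},
              {'id': 3, 'field1': 'c'}],
    old_data=[{'id': 1, 'field1': 'a'},
              {'id': 2, 'field1': 'b'}],
)
# new_rows == [{'id': 3, 'field1': 'c'}]
# changed_rows == [{'id': 2, 'field1': 'B'}]
```

This turns tens of thousands of per-row queries into one read, one bulk_create, and a (hopefully small) number of updates.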
If you look at the source code for update_or_create(), you'll see that it's hitting the database multiple times for each call (either a get() followed by a save(), or a get() followed by a create()). It does things this way to maximize internal consistency - for example, this ensures that your model's save() method is called in either case.
But you might well be able to do better, depending on your specific models and the nature of your data. For example, if you don't have a custom save() method, aren't relying on signals, and know that most of your incoming data maps to existing rows, you could instead try an update() followed by a bulk_create() if the row doesn't exist. Leaving aside related models, that would result in one query in most cases, and two queries at the most. Something like:
updated = MyModel.objects.filter(field1="stuff").update(field2="other")
if not updated:
    MyModel.objects.bulk_create([MyModel(field1="stuff", field2="other")])
(Note that this simplified example has a race condition, see the Django source for how to deal with it.)
In the future there will probably be support for PostgreSQL's UPSERT functionality, but of course that won't help you now.
Finally, as mentioned in the comment above, the slowness might just be a function of your database structure and not anything Django-specific.
Just to add to the accepted answer: one way of recognizing whether the operation is an update or a create is to ask the API owner to include a last-updated timestamp with each row (if possible) and store it in your DB for each row. That way you only have to check the rows where this timestamp differs from the one in the API.
I faced an exact issue where I was updating every existing row and creating new ones. It took a whole minute to update 8000 odd rows. With selective updates, I cut down my time to just 10-15 seconds depending on how many rows have actually changed.
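If the API does supply such a timestamp, the selective filter is a one-liner (hypothetical field name last_updated; stored values assumed directly comparable to the API's):

```python
def rows_to_refresh(api_rows, stored_timestamps):
    """Keep only the rows whose API timestamp differs from what we stored.

    stored_timestamps: dict mapping row id -> last_updated value we
    saved the last time we synced.
    """
    return [r for r in api_rows
            if stored_timestamps.get(r['id']) != r['last_updated']]

changed = rows_to_refresh(
    [{'id': 1, 'last_updated': '2024-01-02'},
     {'id': 2, 'last_updated': '2024-01-01'}],
    {1: '2024-01-01', 2: '2024-01-01'},
)
# only row 1 needs refreshing
```

Everything filtered out here never touches the database at all, which is where the 60s-to-10s improvement described above comes from.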
I think the two calls below can, together, do the same thing as update_or_create:
MyModel.objects.filter(...).update()
MyModel.objects.get_or_create()

Do the cursors used in pagination have a specific lifespan or timeout?

In the documentation on pagination there is a section about using cursors to check for new content. This implies that you can store the cursor and come back later to see if something new has appeared. Do the cursors timeout at some point or have a specific lifespan? If I get a cursor while paging through the comments on a post, will that cursor still be valid after an hour, a day, or even a week?
According to the documentation, it will always be valid.
http://developers.facebook.com/docs/reference/api/pagination/
"Cursor pagination is our preferred method of paging, and the list of Graph API endpoints which support it are growing. A cursor refers to a random string of characters which mark a specific point in a list of data. You can reliably assume that a cursor will always point to that same part of the list and use it to page through the data.
If you are using additional filters on the API endpoint when you receive a cursor, this cursor will only work for calls with those filters."
Update
As pointed out by Scutterman, these cursors also have a lifespan. You should discard them after at most a day.
"Pagination markers used in these methods should not be used in long-lived applications as a way to jump back into a stream. For example, storing the cursors in a database and then re-using them a few days later may return stale, incorrect, or no data at all. Make sure that your cursors are relatively fresh - less than a day at most."
Update:
At some point the documentation added the information point:
Don't store cursors. Cursors can quickly become invalid if items are added or deleted.
There is a note in the documentation:
Cursors can't be stored for long periods of time and expected to still work. They are meant to paginate through the result set in a short period of time.

My test script is not finding items in dynamic web list control - list in code not updated with current info

I am having a problem in QTP with selection of a web list box and I have exhausted what I know to do to resolve it. I am hoping someone can help.
There are 5 controls in a container, 2 webedit controls and 3 weblist controls. Together, they allow entry of accounts associated with a customer, and there can be 16 accounts for any customer. There are only ever five controls active at any time, whether editing or entering information for an account. When the information for an account is entered and accepted, it changes to a read-only table row and a new set of controls appears below it for entry of the next account.
The information entered in these controls is the account number, type, description, designation, and status. The status value is contingent on the designation, and the items in the list change dynamically depending on what the user specifies for the designation. The status list is not enabled until the designation is specified.
After some experimenting with timing, I was able to get past an issue where the status list for the first account was seen by QTP as disabled even though it was clearly enabled. I was then able to advance to entry of the second account.
I change the designation on the second account and try to select an appropriate item (specified in a data table) in the status list. My specification from the data table is never found. I figured it was a problem with verbiage differences and also that I should probably anticipate that and address it now, so I wrote a function to accept three parameters, the list and up to two search items. My function searches the listbox passed to it and looks for a match (full or partial) on the search items it receives. Here is where I encountered a significant problem.
The list of the control my function received was from the previous iteration of the test, corresponding to the designation of that account. This is why my function was not finding the selection item. The list on the screen shows the appropriate items, which suggests that I am looking at the wrong object. I also get the ‘object is disabled’ message when I put my data table value directly into the list with the select statement.
The active controls are displayed below the read-only presentation of the previously entered accounts. I am very new to QTP, but I also read documentation. My only theory at this point is that QTP is not passing the right list to my function… that perhaps the way the object was learned included its position, which will change each time. However, the spy identifies the on-screen control as the same item I processed for the preceding account, which makes my theory suspect. In addition, the other four controls, which do not change dynamically, do not present the same problem; I can put information in them consistently.
I apologize for the length of this question, but I wanted to be as thorough and clear as possible. Can anyone help me get past this obstacle?
There are many possibilities why it is exhibiting this behaviour, so let's start with something simple:
Did you try a myWebList.Refresh call before you do something with the listbox? Refresh re-identifies the object.
Have you put a break point (red dot) inside the custom function? Just see what is happening there. With the debug viewer you can enter a realtime command in the scope of that function, like msgbox myWebList.Exist(0) or myWebList.Highlight.
Can you see how the disabled property is propagated to the webpage? If you can 'Object Spy' it as TO property, you can add it in the GUI Map description.
A more sophisticated approach is to create a Description with the weblist properties. If you can read the disabled property as an RO property from the 'Object Spy', you can use it as an identifier like "attribute/customDisabledProperty:=false".
If you cannot correctly read the disabled property, you can create a description object and do a count on the amount of items that match that description on that page with numberOfLists = Browser("my browser").Page("my page").ChildObjects(myDescription).Count and get the last list with Set lastList = Browser("my browser").Page("my page").ChildObjects(myDescription)(numberOfLists-1)
Keep us informed. Depending on how this works out, we can work into a direction for a solution.
I figured this out early this morning. There are 4 different list boxes used, each made visible or enabled dependent on the selection of the previous list. This is why the spy found the one listed when I was using it and also why the items in the list were not appropriate to what I had selected and also why it appeared disabled to QTP but enabled to me.
I had been selecting the same designation each time I tried to spy the control, so the controls all seemed to be the same. I am also a Windows programmer, and I would have populated a single list each time with the appropriate items, so I presumed that was what the web developer was doing. It was not, and it took some time to figure that out. Now that I have, everything is working fine, and I came back to report it. This was a significant, time-intensive lesson.
Thank you very much for your input. It is still useful because I am very new to QTP and every thing I learn is of value.