My AnyLogic program simulates the arrival of two different types of product, identified by an option list. I want to seize each product on a forklift and then use MoveTo to bring product1 to Node1 and product2 to Node2. I am having trouble using a condition in MoveTo; I probably need a variable that contains the destination node, but it doesn't work. If you can suggest a function to pick the right node, and how to use it in the MoveTo block, I would appreciate it. Thank you.
I'm using the page below as a POS sales list. Here the user can scan an article with a barcode gun, and the code is translated into the item number.
The problem is that when they finish scanning an item and want to move on to the next one, the cursor goes automatically to the first column (Item Type). My goal is to force it to go to the second column (Item No.), because the Item Type defaults to "Product".
Simply swapping the column order of Item No. and Item Type is not enough in this case.
Since ACTIVATE is not supported for controls in the RTC, there are not many good options here.
Try using the QuickEntry property: set it to false for all controls on the subpage except No.
Create a custom page with as few fields as possible and use it as a buffer to scan all items, creating the sales lines upon closing of this new page. You can implement the desired behavior on this page and keep the original page almost unmodified.
Create an add-in that intercepts the scanner output somehow.
I'm trying to come up with an algorithm to do the following in a MapReduce job. I receive a bunch of objects and the user IDs of their owners. In other words, I receive a bunch of pairs:
(object, uid)
I want to end up with a list of pairs (object, count), where count is the number of times the object occurs in the list. The caveat is that we need to filter everything as follows:
We should only include objects that are repeated for at least n different uids.
We should only include objects whose total count is at least m.
Objects and users are all represented as integers. It would be trivial to convert each (object, uid) pair into (object, 1) and then reduce these together by summing the second integers; I could then filter out everything that doesn't hit threshold (2). However, at that point I would have lost the information necessary to filter by (1), and that is what I don't know how to incorporate. Does anyone have any suggestions?
The easiest and most natural way is to run two MR jobs in sequence. The goal of the first job is to count how many times each object is owned by each uid; the result is triplets (object, uid, count). The uid field here is for debugging purposes only -- it is not required by the second job. The second job groups the triplets by object. At the end of each reduce() call, you know:
the number of different uids for the object (the number of received triplets)
the total number of times the object is owned (the sum of the count fields)
So now you can apply both filters.
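The logic of the two jobs can be sketched in plain Python (no Hadoop dependencies; the pair counting stands in for the first job's reducer, the per-object grouping for the second, and the sample data and thresholds are invented for illustration):

```python
from collections import Counter, defaultdict

def two_phase_filter(pairs, n, m):
    """pairs: iterable of (object, uid) tuples.
    Returns [(object, total)] for objects owned by >= n distinct uids
    and occurring >= m times in total."""
    # "Job 1": count how many times each object is owned by each uid.
    per_owner = Counter(pairs)                 # {(obj, uid): count}

    # "Job 2": group the (obj, uid, count) triplets by object.
    grouped = defaultdict(list)
    for (obj, _uid), count in per_owner.items():
        grouped[obj].append(count)

    result = []
    for obj, counts in grouped.items():
        distinct_uids = len(counts)            # number of received triplets
        total = sum(counts)                    # sum of the count fields
        if distinct_uids >= n and total >= m:
            result.append((obj, total))
    return result

pairs = [(1, 10), (1, 10), (1, 11), (2, 10), (2, 10), (2, 10)]
print(two_phase_filter(pairs, n=2, m=3))  # → [(1, 3)]
```

Object 1 passes (2 distinct uids, total 3); object 2 fails filter (1) because all three occurrences belong to the same uid.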
A single-job setup is also possible, but it requires manipulating the job at a slightly lower level with setSortComparatorClass(), setGroupingComparatorClass() and setPartitionerClass(). The idea is that map() should emit composite keys which contain both the object and uid fields; the value is not used at all (NullWritable):
The Partitioner class partitions keys using only the object field of the key. This guarantees that all records with the same object go to the same reduce task.
The SortComparator class first compares the object field and, only if the objects are identical, the uid field.
The GroupingComparator class uses only the object field for comparison.
As a result, the input of a single reduce task will look like the following:
object1 uid1
object1 uid2
object1 uid2
object1 uid2
object1 uid5
object1 uid6
object1 uid6
------------ <-- boundary of call to reduce()
object7 uid1
object7 uid1
object7 uid5
------------- <-- boundary of call to reduce()
object9 uid3
As you can see, the uids are strictly ordered inside each call to reduce(), which allows you to count both the number of distinct and non-distinct uids simultaneously.
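The bookkeeping that this single reduce() performs can be illustrated in plain Python: sort the composite (object, uid) keys, group by the object field only, and count total and distinct uids in one pass over the sorted run. The function name and sample data are illustrative, not Hadoop API calls:

```python
from itertools import groupby

def single_pass_filter(pairs, n, m):
    # SortComparator: order by object first, then uid (composite key).
    keys = sorted(pairs)
    results = []
    # GroupingComparator: one reduce() call per object.
    for obj, group in groupby(keys, key=lambda k: k[0]):
        total = 0
        distinct = 0
        prev_uid = None
        for _, uid in group:
            total += 1
            # uids arrive strictly ordered, so any change marks a new distinct uid.
            if uid != prev_uid:
                distinct += 1
                prev_uid = uid
        if distinct >= n and total >= m:
            results.append((obj, total))
    return results

# The reduce-task input shown above: object1, object7, object9.
pairs = [(1, 1), (1, 2), (1, 2), (1, 2), (1, 5), (1, 6), (1, 6),
         (7, 1), (7, 1), (7, 5), (9, 3)]
print(single_pass_filter(pairs, n=2, m=3))  # → [(1, 7), (7, 3)]
```

Object 9 is dropped because it has only one distinct uid; objects 1 and 7 pass both thresholds.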
I have a dataset that has the following format:
Company|Dependent var|Independent vars|Company ID|Date|dummy1|dummy2
A|Values|Values|1|01/01/2015|0|1
A|Values|Values|1|01/01/2015|1|0
A|Values|Values|1|01/01/2014|1|0
B|Values|Values|2|01/01/2015|0|1
B|Values|Values|2|01/01/2014|0|1
As you can see, companies can have multiple values in the same period (as they are rated by two different agencies). The problem arises when I use xtset to define my panel data: it throws the "repeated time values within panel" error. I wish to cluster errors by company, so I define the panel dataset using "xtset CompanyID Date". Is there a way I can get around the error?
I wish to distinguish between the two entries that Stata perceives as the same (they aren't, as the dummy variables differentiate between them) but still cluster errors by company (using the company ID). Do I need to create a new ID? Will this lose the clustering by company?
Any help would be appreciated.
Laurence
Follow-up: Basically I found that I am dealing with what is known as a multidimensional panel (e.g. y_ijk), not a two-dimensional panel (y_ij), and you can't run two-dimensional commands on a more-than-two-dimensional panel. So I needed to reduce the panel to two dimensions by creating a new ID: egen newID = group(companyID Dummy1 Dummy2). This then allows you to use the two-dimensional commands. I think you can then still cluster the errors by company later using vce(cluster clustervar). Thanks
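What egen group() does here, collapsing the extra dimensions into a single panel ID, can be mimicked outside Stata. This plain-Python sketch (sample rows invented, mirroring the table above) assigns one new ID per distinct (companyID, dummy1, dummy2) combination, so each (newID, date) pair is unique again:

```python
def make_panel_id(rows):
    """rows: list of dicts with companyID, dummy1, dummy2, date.
    Adds a newID field: one ID per distinct (companyID, dummy1, dummy2)
    combination, numbered in order of first appearance (like egen group)."""
    ids = {}
    for row in rows:
        key = (row["companyID"], row["dummy1"], row["dummy2"])
        # First occurrence of a combination gets the next group number.
        row["newID"] = ids.setdefault(key, len(ids) + 1)
    return rows

rows = [
    {"companyID": 1, "dummy1": 0, "dummy2": 1, "date": "01/01/2015"},
    {"companyID": 1, "dummy1": 1, "dummy2": 0, "date": "01/01/2015"},
    {"companyID": 1, "dummy1": 1, "dummy2": 0, "date": "01/01/2014"},
    {"companyID": 2, "dummy1": 0, "dummy2": 1, "date": "01/01/2015"},
]
print([r["newID"] for r in make_panel_id(rows)])  # → [1, 2, 2, 3]
```

Note that companyID is untouched, so clustering the standard errors on it remains available regardless of which variable defines the panel.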
Say you have an ordered list. You order the list based on a model field called "index". So the first item has an index of 0, the second has an index of 1 and so on...
The index is unique to that model object.
How would you implement this?
I want to be able to create more instances of the model object, where each new object is assigned the next available index (i.e. it is added to the end of the list). I then want to be able to reorder the list, so that if you change the index of an object, the indexes of all the following objects shift by one.
If you want an IntegerField that increments itself, you can use the id. It's unique for that model and Django generates it automatically. You can order by it as follows.
Reverse order:
MyModel.objects.all().order_by('-id')
Normal order:
MyModel.objects.all().order_by('id')
If you just need a field that holds an auto-incrementing index, you don't need to create another one -- only if you need to modify it; and if the index is unique, you cannot edit it anyway without risking duplicates. So I would use the id, MyModel.id.
Here is the documentation for .order_by().
There is no field that does that automatically. Have you looked into using signals for this? You could hook up a signal that detects an index change and triggers a function that updates the index of every object whose current index is greater than the one being removed/changed.
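As a plain-Python illustration of that update function (no Django here; the objects are stand-in dicts, and in practice the loop body would become a bulk update on the queryset), moving an object to a new index and shifting everything in between could look like:

```python
def move_to_index(objects, obj, new_index):
    """objects: list of dicts, each with a unique 'index' in 0..len-1.
    Moves obj to new_index and shifts the indexes of the objects
    between the old and new positions by one, keeping indexes unique."""
    old_index = obj["index"]
    for other in objects:
        if other is obj:
            continue
        if old_index < other["index"] <= new_index:
            other["index"] -= 1   # obj moved down past these
        elif new_index <= other["index"] < old_index:
            other["index"] += 1   # obj moved up past these
    obj["index"] = new_index
    return sorted(objects, key=lambda o: o["index"])

items = [{"name": n, "index": i} for i, n in enumerate("abcd")]
reordered = move_to_index(items, items[3], 0)   # move "d" to the front
print([o["name"] for o in reordered])  # → ['d', 'a', 'b', 'c']
```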
You may have to rethink your schema, because if you change the index of the first element in a list that has, say, 1 million elements, you are going to update 1 million objects! You could instead store each object's left and right "neighbour" and create a method to build the list from them.
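The neighbour idea amounts to storing a doubly linked list in the rows: each object keeps references to its left and right neighbours (foreign keys in a real model), so an append or move touches at most a couple of records instead of the whole table. A minimal in-memory sketch, with invented names:

```python
class Item:
    def __init__(self, name):
        self.name = name
        self.left = None    # would be a nullable FK column in the model
        self.right = None

def append(tail, item):
    """Link item after the current tail; only two objects are written."""
    if tail is not None:
        tail.right = item
        item.left = tail
    return item  # the new tail

def as_list(head):
    """Walk the right pointers to rebuild the ordered list."""
    out = []
    while head is not None:
        out.append(head.name)
        head = head.right
    return out

a, b, c = Item("a"), Item("b"), Item("c")
tail = append(None, a)
tail = append(tail, b)
tail = append(tail, c)
print(as_list(a))  # → ['a', 'b', 'c']
```

The trade-off is that reading the full ordering now requires walking the chain (or a recursive query), so this pays off only when reorders are frequent and full reads are not.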