How to get total increments in Disp subroutine in ABAQUS - fortran

I am using the DISP subroutine in ABAQUS. I want to apply a specific boundary condition to my model. This is a part of my code:
      IF (NODE==mst(2,1)) THEN
        IF (JDOF .EQ. 1) THEN
          U(1) = 0.01
        ELSE IF (JDOF .EQ. 2) THEN
          U(1) = 0.0
        ELSE
          U(1) = 0.0
        END IF
      END IF
The problem is that I do not know how many increments ABAQUS uses to solve the model, so I do not know how to ramp the value "U(1)=0.01" incrementally. I know that "KINC" gives the current increment number, but I do not know how to get the total number of increments. Moreover, is there any need to apply the value incrementally, or would ABAQUS do it automatically?

The total number of increments is not known a priori unless you fix it yourself: if Abaqus has trouble converging, it will cut the increment size and thus increase the number of increments. You can use TIME(1) and TIME(2), which hold the current step time and total time, to reformulate your prescribed displacement as a function of time, which I think is what you're looking for. For example, if the step period is 1.0, U(1) = 0.01*TIME(1) ramps the displacement linearly from 0 to 0.01 over the step.
For the second question, Abaqus will choose the increment size automatically or follow a fixed increment size, depending on the solver settings you prescribe. There are some user subroutines that can control the increment size, but DISP is not one of them.

Related

Saving from nonlinearity in GAMS

I am trying to solve a machine allocation problem with a time horizon of 5 days. The production plan is hard to keep up with, so my objective is to minimize the total machine working time. Machines use molds to produce, and there are molds for each type of product. If a product is produced at the end of a day and there will be production of it on a later day, the total number of setups needed for that machine should be decreased by one. For this reason, I use:
sets
i: mold types
j: machines
k: days
parameters
x(i,k): production needed from mold i on day k
y(i,j): 1 if mold i is compatible with machine j
decision variables
m(i,j,k): 1 if mold i is processed on machine j on day k, 0 otherwise
b(j,k): number of setups on machine j on day k
The setup number for day 1, b(j,'1'), is simply equal to sum(i, m(i,j,'1')).
For the other days' setup numbers I tried the following, but it makes the problem nonlinear, and then it takes months to solve:
b(j,'2')=e=sum(i,m(i,j,'2')) - sum(i,m(i,j,'2')*m(i,j,'1'))
This way, if mold i is used on both days, no setup is made on the second day. To prevent multiple setup reductions I added: sum(i, m(i,j,'2')*m(i,j,'1')) =l= 1
So, how can I decrease the setup number for a machine if it used a mold the day before, without making the problem nonlinear?
It is possible to linearize m(i,j,'2')*m(i,j,'1') by introducing a new binary variable Both(i,j) with the constraints:
Both(i,j) <= m(i,j,'2')
Both(i,j) <= m(i,j,'1')
Both(i,j) >= m(i,j,'2') + m(i,j,'1') - 1
This transformation is done automatically by some solvers.
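To see why this works for binary variables, you can enumerate all cases: the three constraints are satisfiable only when Both equals the product. A small standalone check (in C++ rather than GAMS, purely for illustration):

#include <cassert>

// Enumerate all binary combinations of m1, m2 and Both, and verify that the
// three linear constraints admit exactly Both = m1*m2 and nothing else.
int main() {
    for (int m1 = 0; m1 <= 1; ++m1)
        for (int m2 = 0; m2 <= 1; ++m2)
            for (int both = 0; both <= 1; ++both) {
                bool feasible = both <= m1 && both <= m2 && both >= m1 + m2 - 1;
                assert(feasible == (both == m1 * m2)); // only the product survives
            }
    return 0;
}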
Note that there are alternative ways to model the start of a run, and often there are things to exploit (depending on the details).

Python, getting a centered average with a catch

So, my assignment is to get the centered average of a list, much like a few of the other posts on here, like this one (https://codereview.stackexchange.com/questions/108404/how-to-get-a-centered-mean-excluding-max-and-min-value-of-a-list-in-python) and a few others. However, my professor has told us we are not allowed to use min, max, or sort to solve this. What I have right now is this; it is still a work in progress:
def centered_average(nums):
    high = 0
    low = 0
    a = 0
    b = 0
    for i in range(len(nums)):
        if nums[i] > a:
            a = nums[i]
            high = a
    for i in range(len(nums)):
        if nums[i] < b:
            b = nums[i]
            low = b
    total = sum(nums)
    average = (total - high - low) / (len(nums) - 2)
    print(average)
My problem is that I can't get low to be recognized as the lowest number in the list. For example, if I input [1,2,3,4,5] as the list, my function should find 5 as the high, 1 as the low, and return 3 as the centered average, since 2+3+4 is 9 and 9/3=3. However, what I have right there returns the low as 0. I think it is because of len(nums), since it would think the first number is a 0. I'm not sure how I should fix this.
Note: I am still a beginner at this stuff so I know what I have might not be the best or that the error could be simple to fix, but I am still in the process of learning so any help and advice would be much appreciated.
The problem is that you start the running minimum (and running maximum) at 0: every element of [1,2,3,4,5] is greater than 0, so nums[i] < b is never true and low stays 0.
Start the running minimum at float("inf") (everything is guaranteed to be less than that) and the running maximum at float("-inf") (everything is guaranteed to be greater than that).
Or, start both as the first element of the list, e.g. a = b = nums[0] before the loops (the first element is either a true minimum/maximum, or there is another element lower/higher than it).

C++, determine the part that has the highest zero-crossing rate

I'm not a specialist in signal processing. I'm doing simple processing on a 1D signal using C++. I really want to know how I can determine the part that has the highest zero-crossing rate (highest frequency!). Is there a simple way or method to find the beginning and the end of this part?
This image illustrates the form of my signal, and this image shows what I need to do (the two indexes of the beginning and the end).
Edited:
Actually I have no prior idea about the width of the beginning and the end; it is quite variable.
I can calculate the number of zero crossings, but I have no idea how to define its range:
// Count the zero crossings (sign changes) in the whole signal.
int calculateZC(const std::vector<double>& signals){
    int ZC_counter = 0;
    const int size = static_cast<int>(signals.size());
    for (int i = 0; i < size - 1; i++){
        if ((signals[i] >= 0 && signals[i+1] < 0) || (signals[i] < 0 && signals[i+1] >= 0)){
            ZC_counter++;
        }
    }
    return ZC_counter;   // the function counts, so return an int, not a double
}
Here is a fairly simple strategy which might give you a point to start from. The outline of the algorithm is as follows:
Input: Vector of your data points {y0,y1,...}
Parameters:
Window size sigma.
A threshold 0<p<1 defining when to start looking for a region.
Output: the start and end points {t0, t1} of the region with the most zero-crossings
I won't give a full implementation, but the method should be easy to implement; a minimal C++ sketch follows at the end. As an example, let us use a test signal whose zero-crossing density is much higher in one part than in the rest.
What we want is the region between about 480 and 600, where the zero density is higher than in the front part. The first step of the algorithm is to calculate the positions of the zeros. You can do this with what you already have, but instead of counting, you store the values of i where you met a zero crossing.
This will give you a list of zero positions.
From this list (you can build it directly in the above for-loop!) you create a list of the same size as your input data which looks like {0,0,0,...,1,0,...,1,0,...}: every zero-crossing position in your input data is marked with a 1.
The next step is to smooth this list with a smoothing filter of size sigma. Here you can use whatever you like; in the simplest case, a moving average or a Gaussian filter. The larger you choose sigma, the bigger the look-around window becomes, which measures how many zero-crossings are around a certain point. For the example I used a Gaussian filter of size 10.
In the next step, you go through the filtered data and find the maximum value; in this case it is about 0.15. Now you choose your second parameter, some percentage p of this maximum, say p=0.6.
The final step is to walk through the filtered data once more: when a value rises above the threshold p times the maximum, you start a new region; as soon as a value drops below it, you end this region and remember its start and end points. Once you have walked through all the data, you are left with a list of regions, each defined by a start and an end point. Now you choose the region with the biggest extent and you are done.
(Optionally, you could add the filter size to each end of the final region)
For the above example, I get 11 regions, as follows:
{{164,173},{196,205},{220,230},{241,252},{259,271},{278,290},
{297,309},{318,327},{341,350},{458,468},{476,590}}
The one with the biggest extent is the last one, {476,590}. The final result (with half a filter width of padding on each side) then covers the desired high-frequency part.
Conclusion
Please don't be discouraged by the length of my answer. I tried to explain everything in detail. The implementation is really just a few loops, as the sketch below shows:
one loop to create the zero-crossings list {0,0,...,1,0,...}
one nested loop for the moving-average filter (or you use some library's Gaussian filter); here you can extract the maximum value at the same time
one loop to extract all regions
one loop to extract the largest region, if you haven't already done so in the previous step
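A minimal sketch of these loops, using a plain moving average for the smoothing and the threshold logic described above (function and variable names are illustrative):

#include <vector>
#include <algorithm>
#include <utility>

// Return the {start, end} indexes of the region with the most zero-crossings.
std::pair<int, int> densestRegion(const std::vector<double>& y, int sigma, double p) {
    const int n = static_cast<int>(y.size());
    // 1) Mark each zero-crossing position with a 1.
    std::vector<double> marks(n, 0.0);
    for (int i = 0; i + 1 < n; ++i)
        if ((y[i] >= 0 && y[i+1] < 0) || (y[i] < 0 && y[i+1] >= 0))
            marks[i] = 1.0;
    // 2) Smooth with a moving average of half-width sigma; track the maximum.
    std::vector<double> smooth(n, 0.0);
    double maxVal = 0.0;
    for (int i = 0; i < n; ++i) {
        const int lo = std::max(0, i - sigma);
        const int hi = std::min(n - 1, i + sigma);
        double sum = 0.0;
        for (int j = lo; j <= hi; ++j) sum += marks[j];
        smooth[i] = sum / (hi - lo + 1);
        maxVal = std::max(maxVal, smooth[i]);
    }
    // 3) Extract all regions where the smoothed density exceeds p * maxVal.
    const double threshold = p * maxVal;
    std::vector<std::pair<int, int> > regions;
    int start = -1;
    for (int i = 0; i < n; ++i) {
        if (smooth[i] > threshold && start < 0) start = i;          // region opens
        if ((smooth[i] <= threshold || i == n - 1) && start >= 0) { // region closes
            regions.push_back(std::make_pair(start, i));
            start = -1;
        }
    }
    // 4) Pick the region with the biggest extent.
    std::pair<int, int> best(0, 0);
    for (std::size_t k = 0; k < regions.size(); ++k)
        if (regions[k].second - regions[k].first > best.second - best.first)
            best = regions[k];
    return best;
}

Swapping the moving average for a Gaussian filter only changes step 2; the rest of the logic stays the same.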

Stata seems to be ignoring my starting values in maximum likelihood estimation

I am trying to estimate a maximum likelihood model and it is running into convergence problems in Stata. The actual model is quite complicated, but it converges with no troubles in R when it is supplied with appropriate starting values. I however cannot seem to get Stata to accept the starting values I provide.
I have included a simple example below, estimating the mean of a Poisson distribution. This is not the actual model I am trying to estimate, but it demonstrates my problem. I set the trace option, which lets you see the parameter vector as Stata searches the likelihood surface.
Although I use init to set a starting value of 0.5, the first iteration still shows that Stata is trying a coefficient of 4.
Why is this? How can I force the estimation procedure to use my starting values?
Thanks!
generate y = rpoisson(4)
capture program drop mypoisson
program define mypoisson
    args lnf mu
    quietly replace `lnf' = $ML_y1*ln(`mu') - `mu' - lnfactorial($ML_y1)
end
ml model lf mypoisson (mean:y=)
ml init 0.5, copy
ml maximize, iterate(2) trace
Output:
Iteration 0:
Parameter vector:
             mean:
             _cons
  r1             4
Added: Stata doesn't ignore the initial value. If you look at the output of the ml maximize command, the first line in the listing will be titled
initial: log likelihood =
Following the equals sign is the value of the likelihood at the parameter values set in the init statement.
I don't know how the search(off) or search(norescale) solutions affect the subsequent likelihood calculations, so these solutions might still be worthwhile.
Original "solutions":
To force a start at your initial value, add the search(off) option to ml maximize:
ml maximize, iterate(2) trace search(off)
You can also force a use of the initial value with search(norescale). See Jeff Pitblado's post at http://www.stata.com/statalist/archive/2006-07/msg00499.html.

Optimization run assistance

I am running an optimisation of two sets of data against each other and am after some assistance with looking up the settings of a run based on the calculated results. I'll explain....
I run 2 data lines against each other (think graph lines), Line A and Line B. These lines have crossing points, upward and downward, based on the direction of each line: e.g. Line A going up while Line B goes down is an 'upward cross', and Line A going down while Line B goes up is a 'downward cross'. The program performs financial analysis.
I analyze the crossing points and gain a resultant 'Rank' from the analysis based on a set of rules. The rank is a single integer.
Line A has a number of settings for the optimisation run, e.g. Window 1 with values from 10 to 20 and Window 2 with values from 30 to 40. Line B also has settings.
When I run the optimisation I iterate through the parameters available for each line and calculate the rank. The result of the optimisation run is a list of ranks whose size is the number of available permutations.
So my question is this:
What is the best way to look up the line settings from the calculated rank, using a position (index) in the rank list? The optimisation settings used to create the run will be stored for that ranked run and can be used for the look-up.
I will also be adding additional line parameters to the system in the future, so I want the program to take future line settings into account without invalidating any rank files created before a new parameter was added.
In addition to that I want to be able to find out an index based on a particular setting included in the optimisation run (the reverse look-up of the previous method).
I want to avoid versioning for backward compatibility if at all possible, so that the lookup algorithm is self-sufficient.
Is a hash table suitable for this purpose or do you have any implementation techniques that would fit better? Do you have any examples of this type of operation in action in C++?
Thanks,
Chris.
If I understand correctly, you have a bunch of associated data (settings + rank), on which you would like to be able to perform lookups with different key types. If so, then Boost.MultiIndex sounds like what you're looking for.
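As a minimal sketch (the RankedRun fields and tag names here are invented for illustration; adapt them to your real settings):

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/member.hpp>
#include <iostream>

using namespace boost::multi_index;

// One optimisation result: the rank plus the settings that produced it.
struct RankedRun {
    int rank;
    int windowA;   // illustrative Line A setting
    int windowB;   // illustrative Line B setting
};

// Tags naming the lookup directions.
struct by_rank {};
struct by_windowA {};

typedef multi_index_container<
    RankedRun,
    indexed_by<
        // forward look-up: settings from a rank
        ordered_non_unique<tag<by_rank>, member<RankedRun, int, &RankedRun::rank> >,
        // reverse look-up: runs that used a particular setting
        ordered_non_unique<tag<by_windowA>, member<RankedRun, int, &RankedRun::windowA> >
    >
> RunIndex;

int main() {
    RunIndex runs;
    runs.insert(RankedRun{7, 12, 33});
    runs.insert(RankedRun{3, 15, 38});

    // Settings for a given rank:
    const auto& rankIdx = runs.get<by_rank>();
    auto it = rankIdx.find(7);
    if (it != rankIdx.end())
        std::cout << "rank 7 used windowA=" << it->windowA << "\n";

    // Reverse look-up: all runs where windowA was 15.
    const auto& winIdx = runs.get<by_windowA>();
    for (auto r = winIdx.equal_range(15); r.first != r.second; ++r.first)
        std::cout << "windowA 15 gave rank " << r.first->rank << "\n";
    return 0;
}

Adding a new setting later just means adding a field and, if you need to search on it, one more index; existing rank-based lookups are unaffected.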