So I have to write a greedy algorithm that, given a linked list or array of pairs (t_i, w_i), produces a schedule that earns the maximum amount of money within time T. Here t_i is the maximum number of hours that can be worked at that job, w_i is the wage in dollars per hour, and T is the maximum number of hours the person is willing to work per week. A job doesn't need to be worked for the full t_i hours; the worker can switch jobs early and is paid for each hour actually worked.
So I wrote a greedy algorithm where you take the job with the highest hourly wage and work it for as long as you can, then take the job with the second-highest hourly wage and work it for as long as you can, and so on until you have worked T hours.
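In case it helps, here is a rough C++ sketch of what I mean by that greedy strategy (the Job struct and the names are just placeholders I'm using here, not part of the original assignment):

#include <algorithm>
#include <vector>

struct Job {
    double maxHours; // t_i: maximum hours this job can be worked
    double wage;     // w_i: wage in dollars per hour
};

// Greedy schedule: take the highest-paying job first and work it as long
// as allowed, then the next highest, until T hours are used up.
double maxEarnings(std::vector<Job> jobs, double T) {
    std::sort(jobs.begin(), jobs.end(),
              [](const Job& a, const Job& b) { return a.wage > b.wage; });
    double earned = 0.0;
    for (const Job& j : jobs) {
        if (T <= 0) break;
        double hours = std::min(j.maxHours, T); // work it as long as we can
        earned += hours * j.wage;
        T -= hours;
    }
    return earned;
}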
I am now being asked to use an exchange argument to prove that my pseudocode produces an optimal schedule. In my opinion, my solution is already the optimal schedule, so what would I compare it to then?
I'll give you the idea of the proof without writing it out entirely.
Take an optimal solution S. This solution earns $x, where x is the maximum amount of money you can make.
Then take your greedy solution G.
Now look at the first job chosen in G; call it j1, with time spent t1. There are two possibilities: either j1 is in S or it isn't. If it isn't, then replacing some job in S with j1 gives a solution that is better (or at least as good), BUT S is optimal, so you can conclude that j1 is in S. The same kind of reasoning proves that the time associated with j1 in S is t1.
Then do that for every job, and you'll conclude that the amount of money earned is the same in G and in S: both earn x. So G is optimal.
I am trying to formulate a problem that will spit out an optimal schedule for my tasks to be completed. To keep the information confidential, I will refer to my tasks as papers that need to be written. Here is the premise of my problem.
There are 320 papers to be written. (All writers can write these papers). Each paper takes a different amount of time to complete.
We have 2 types of workers available to complete this set of papers.
We have 150 writers, whose responsibility it is to actually write the paper.
We have 25 movers, whose responsibility it is to take the completed papers and go and grab a new paper for the writers to work on. For the sake of simplicity, I am assuming that the time to take a completed paper and deliver a new one is constant for each move.
The goal of this problem will be to minimize the total length of time it takes to write all of these papers with my staff. We are restricted by the following:
How many writers we have to write papers at the same time
How many movers are available to move papers at the same time
Each mover takes 25 minutes to move a paper for the writer
Movers cannot move papers for writers that are within 2 writers of each other at the same time (If writer 3 has completed his paper and a mover begins moving a paper for them, then writers 1,2,4, and 5 will have to wait until the mover for writer 3 has finished their move). This constraint is meant to represent a physical limitation we have at our facility.
My Approach:
It has been some time since I've properly done LP so I am rusty. I have defined the following variables but am not sure if these are good or not. I don't know whether to consider time $t$ as a parameter for these variables or as its own variable and this is what I'm mainly struggling with.
$D_j$: The length of time for paper $j$ to be completed.
$S_{j,w}$: The point in time when writer $w$ begins writing paper $j$.
$X_{j,w}$: Binary variable representing whether or not a paper $j$ is being written by writer $w$.
$M_{m,w}$: Whether or not mover $m$ moves a paper for writer $w$
Constraints that I have come up with are as follows:
$\sum_{w \in W} X_{j,w} = 1 \quad \forall j$ (each paper is written by exactly one writer)
$D_j,\ S_{j,w} \ge 0$
I am struggling with how to wrap my head around how to factor in a timeline as either a variable or some set or whatever.
Edit: I've spent some more time and discovered that this is a common difficulty with this type of problem (yay!). The two routes are to treat time as either a discrete or a continuous variable. Though the extra precision would be nice, the data I have at the facility is only available per minute, so I think treating time as a discrete variable with one-minute intervals is reasonable.
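For instance, with one-minute buckets I am considering indexing the assignment variables by time. A sketch of just the core assignment/timing part would be the following (the symbols $x_{j,w,t}$ and $C_{\max}$ are new here, and the mover and 2-writer spacing constraints are not included yet). Let $x_{j,w,t} \in \{0,1\}$ be 1 if writer $w$ starts paper $j$ at minute $t$, and let $C_{\max}$ be the makespan to minimize:

$\sum_{w \in W} \sum_{t \in T} x_{j,w,t} = 1 \quad \forall j$ (every paper is started exactly once)

$\sum_{j \in J} \sum_{t' = \max(0,\, t - D_j + 1)}^{t} x_{j,w,t'} \le 1 \quad \forall w,\ \forall t$ (a writer works on at most one paper at any minute)

$\sum_{t \in T} (t + D_j)\, x_{j,w,t} \le C_{\max} \quad \forall j,\ \forall w$ (the makespan covers every completion time)

$\min C_{\max}$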
I would like to get an output that gives me an optimal schedule for the papers to be written and tells me which papers are being completed by which writers at what time. I will be as active as I can in the comments if anything needs clarification.
Note: I have also posted this question on OR.SE: https://or.stackexchange.com/questions/2734/how-to-formulate-scheduling-matrix-problem-with-mixed-integer-linear-programming
I have also posted it on Math.SE: https://math.stackexchange.com/questions/3384542/should-i-factor-in-time-as-a-parameter-or-a-variable-in-a-scheduling-problem-wit
In an embedded/mobile environment, battery drain has to be considered when developing software, so energy-efficient programming matters in the embedded/mobile world.
The problem is (regarding Assembly/C/C++):
Say we have a stable software release that, when run on an arbitrary platform X, consumes Y amount of power (in watts). Now we are going to make a code change, and we want to measure how it will affect energy consumption and efficiency at build time.
INT16U x = 0xFFFF;
/* ... some code in stable release ... */
for (; x < 4096; ++x)
{
    /* ... some code in stable release ... */
    INT64U foo = x >> 256 ? x : 4096;   // point 1: sample code change
    if (~foo & foo) foo %= 64;          // point 1: sample code change
    /* ... some code in stable release ... */
}
Simply put, we want to measure how the code change at point 1 affects energy efficiency (relative to the stable release statistics), rather than profiling space and time (performance plus memory). If we want to build a simple energy/power analysis and profiling tool in C/C++:
Is there any recommended C/C++ library or source for building a power analysis tool?
If we have to analyze the change in power consumption at the CPU/GPU instruction level for each code change (for example, as at point 1), how can we determine the power consumption of each instruction for an arbitrary CPU or GPU on the respective platform?
How can a developer know how much power consumption is reduced or increased by their code change at application build time rather than at runtime?
TL;DR You can't do it with software, only with a physical meter.
To refine what @Some programmer dude already hinted at:
One problem is that the actual hardware implementation of a given operation is unknown. You might get lists of cycles per opcode, but you don't know what those cycles do. They can take a long route through the chip; some pass through more parts, some through fewer, and so on, so it is unknown how much power an individual cycle needs.
Another problem is the nearly nondeterministic paths through complex code with large input domains (e.g. 16-bit ADCs) and multiple inputs (e.g. reading several sensors at once), especially if you work with floating-point arithmetic.
It is possible to get relative differences in power consumption, but only coarse ones. Coarse as in "100 loops of the same code need more power than 10 loops" coarse. Or: if it runs faster, it most likely needs less power.
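If coarse relative numbers are all you need, one software-only option is to read the energy counter the kernel exposes before and after the code under test. This is only a sketch: it assumes a Linux machine with an Intel CPU that exposes the RAPL counters under /sys/class/powercap, and it gives whole-package energy (other processes included), nothing per instruction:

#include <chrono>
#include <fstream>
#include <iostream>
#include <thread>

// Reads the cumulative package energy in microjoules from the RAPL
// powercap interface (Intel CPUs, Linux; requires read permission).
long long readEnergyUj() {
    std::ifstream f("/sys/class/powercap/intel-rapl:0/energy_uj");
    long long uj = 0;
    f >> uj;
    return uj;
}

int main() {
    long long before = readEnergyUj();
    // ... run the code change under test here ...
    std::this_thread::sleep_for(std::chrono::seconds(1)); // placeholder workload
    long long after = readEnergyUj();
    // Coarse, whole-package number only; the counter also wraps eventually.
    std::cout << "Energy used: " << (after - before) / 1e6 << " J\n";
    return 0;
}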
No, for real numbers you have to swallow the bitter pill, go to the nearest Rohde & Schwarz (not affiliated, I just saw an ad in the sidebar at the moment of writing this) shop, and get a power supply, a meter (two, actually), a frequency generator, and the necessary connecting material. That will set you back somewhere in the mid-to-upper five-digit range (US $). Once you have it, you need to measure the power consumption of several MCUs/CPUs (ca. 30+, to be able to assume a uniform distribution) from every single batch of the processor you use.
That is a lot of work and investment if you don't already have the tools. Measuring is also an art in itself; you need to know what you are doing, and a lot of things can go wrong.
It might be a good idea to spend that money if you want a million-dollar contract with the military and need to guarantee that your device can run five years on a single battery (and the first thing said military does is slap a sticker on it that says "Change battery every 6 months!"). Otherwise: don't even start; it's not worth the headaches.
Is there a minimum number of instructions guaranteed to be executed by a thread during any given time slot? The Wikipedia page on Execution Model says, "The addition operation is an indivisible unit of work in many languages."
I would like to learn more about the execution model of POSIX threads used with C/C++ and the minimum number of indivisible instructions or statements guaranteed to execute in a single time slot. Can someone give a pointer to where I can learn more about this? Thanks in advance.
No, there are no guarantees on the number of instructions executed in a given amount of time. The way things work is more complicated than executing a set number of instructions anyway.
Executed instructions depends more on the processor architecture than the language. The "traditional" MIPS architecture taught in many introductory design courses would execute one instruction per clock cycle; a processor designed like this running at 1MHz would execute one million operations per second. Real-world processors use techniques such as pipelines, branch prediction, "hyper-threading", etc. and do not have a set number of operations per clock cycle.
Beyond that, real-world processors will generally function under an operating system with multi-tasking capabilities. This means that a thread can be interrupted by the kernel at unknown points, and not execute any code at all as other threads are given processor time. There are "real-time" operating systems that are designed to give more guarantees about how long it takes to execute the code running on the processor.
You have already done some research on Wikipedia; some of the keywords above should help track down more articles on the subject, and from there you should be able to find plenty of primary sources to learn more on the subject.
In POSIX threads, there are two main scheduling policies (FIFO and Round Robin); Round Robin is the default scheduler, as it's fairer.
When the RR scheduler is used, each thread gets an amount of time (a.k.a. a quantum) to run, so there's no guarantee that some number X of instructions will get executed, unless we knew how much time each instruction takes.
You can find more about scheduling algorithms on PThreads here: http://maxim.int.ru/bookshelf/PthreadsProgram/htm/r_37.html
Just to give an idea of how Linux defines the round-robin quantum:
/*
 * default timeslice is 100 msecs (used only for SCHED_RR tasks).
 * Timeslices get refilled after they expire.
 */
#define RR_TIMESLICE (100 * HZ / 1000)

#endif /* _LINUX_SCHED_RT_H */
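From user space, if you want to see the quantum the running kernel actually uses, POSIX provides sched_rr_get_interval(). A minimal check (a sketch, Linux/POSIX only) could be:

#include <sched.h>
#include <cstdio>
#include <ctime>

int main() {
    struct timespec ts;
    // pid 0 means "the calling process"; ts receives the RR timeslice.
    if (sched_rr_get_interval(0, &ts) == 0) {
        std::printf("SCHED_RR quantum: %ld.%09ld s\n",
                    static_cast<long>(ts.tv_sec), ts.tv_nsec);
    } else {
        std::perror("sched_rr_get_interval");
    }
    return 0;
}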
To compare C++ and Java on certain tasks I have made two similar programs, one in Java and one in C++. When I run the Java one it takes 25% CPU without fluctuation, which you would expect as I'm using a quad core. However, the C++ version only uses about 8% and fluctuates heavily. I run both programs on the same computer, on the same OS, with the same programs active in the background. How do I make C++ use one full core? These are two programs, neither interrupted by anything: they both ask for some info and then enter an infinite loop until you exit the program, giving feedback on how many calculations per second they manage.
The code:
http://pastebin.com/5rNuR9wA
http://pastebin.com/gzSwgBC1
http://pastebin.com/60wpcqtn
To answer some questions:
I'm basically looping a bunch of code and seeing how often per second it loops. The problem is that it doesn't use all the CPU it could. The whole point is to have the same processor do the same task in Java and C++ and compare the number of loops per second. But if one uses irregular amounts of CPU time and the other loops stably at a certain percentage, they are hard to compare. By the way, if I ask it to execute this:
while(true){}
it takes 25%. Why doesn't it do that with my code?
----edit:----
After some experimenting it seems that my code starts to use less than 25% if I use a cout statement. It isn't clear to me why a cout would cause the program to use less CPU (I guess it pauses until the statement is written, which apparently takes a while?).
With this knowledge I will reprogram both programs (to keep them comparable) and just have them report the results after 60 seconds instead of after every completed loop.
Thanks for all the help, some of the tips were really helpful. After I discovered the answer, someone also posted it as an answer here, so even if I hadn't found it myself I would have gotten the answer. Thanks!
(Though I would still like to know why a std::cout takes such an amount of time.)
Your main loop has a cout in it, which will call out to the OS to write the accumulated output at some point. Either OS time is not counted against your app, or it causes some disk IO or other activity that forces your program to wait.
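A sketch of the usual workaround (the names and the one-second interval are just illustrative) is to keep the hot loop free of I/O and report only once per interval:

#include <chrono>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;
    auto last = clock::now();
    unsigned long long iterations = 0;

    while (true) {
        // ... the work being benchmarked goes here ...
        ++iterations;

        auto now = clock::now();
        if (now - last >= std::chrono::seconds(1)) {
            // One cout per second instead of one per loop iteration.
            std::cout << iterations << " loops/sec\n";
            iterations = 0;
            last = now;
        }
    }
}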
It's probably not accurate to compare both of these running at the same time without considering the fact that they will compete for CPU time. The OS will automatically choose the scheduling for these two tasks, which can be affected by which one started first and a multitude of other criteria.
Running them both at the same time would require configuring the scheduling so that each one is confined to one (or two) CPUs and the two applications use different CPUs. This can be done by having each main function execute a separate thread that performs all the work and then setting the CPU where this thread will run. In C++11 this can be done by creating a std::thread and then setting the underlying CPU affinity through its native_handle.
I'm not sure how to do this in Java but I'm sure the process is similar.
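For the C++ side, pinning via native_handle looks roughly like this on Linux (pthread_setaffinity_np is Linux-specific, so treat it as a sketch rather than portable code):

#include <pthread.h>
#include <sched.h>
#include <thread>

// Pin a std::thread to a single CPU via its native (pthread) handle.
void pinToCpu(std::thread& t, int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(t.native_handle(), sizeof(cpu_set_t), &set);
}

int main() {
    std::thread worker([] { /* benchmark loop here */ });
    pinToCpu(worker, 0); // e.g. confine the C++ benchmark to core 0
    worker.join();
    return 0;
}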
I'm an amateur when it comes to programming, but I want to ask if this is an efficient way of handling things.
Right now my program updates itself on every step, but I am looking to divide it up into a lower frame rate. My current idea is to set a clock in main, so that every 30 ticks or so (for example) the game updates itself. However, I am looking to update the different parts of the program separately within that slot of time (for instance, every 10 seconds), with the program updating the screen at the end of that period. I figured this would help alleviate some of the "pressure" (assuming there is any).
I wouldn't go that way. It's going to be much better/easier/cleaner, especially when starting out, to update the game/screen as often as possible (e.g. put it in a while(true) loop). Then, on each iteration, figure out the elapsed time and use that accordingly (e.g. move an object 1 pixel for every 20 ms elapsed).
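As a sketch of what I mean (the 1 pixel per 20 ms figure is just an example):

#include <chrono>

int main() {
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();
    float xPosition = 0.0f;

    while (true) {
        auto now = clock::now();
        float elapsedMs =
            std::chrono::duration<float, std::milli>(now - previous).count();
        previous = now;

        // Scale movement by elapsed time: 1 pixel per 20 ms, regardless of
        // how fast or slow the machine gets through each frame.
        xPosition += elapsedMs / 20.0f;

        // ... update the rest of the game state and draw the frame ...
    }
}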
The reason this is a better starting point is that you'll be hard-pressed to guarantee exactly 30 fps, and the game will behave weirdly when you miss it (e.g. if a slow computer can only pull 15 fps, you don't want objects going at twice the speed), not to mention drift, individual slow frames, etc.