[Closed 8 years ago: needs more focus.]
I was working on a program that handles a lot of data, and my partner told me I was using too many matrices to manipulate the problem's variables. The idea was to use one-dimensional arrays int a[] instead of two-dimensional arrays int b[][] to save memory and speed up the algorithm. How certain is it that this change will accelerate the execution (or compilation) of my C++ code?
Your question invites guesswork, but:
How certain is it that this change will accelerate the execution (or compilation) of my C++ code?
Prognosis is extremely uncertain. The only proper response is to measure.
Measuring is knowing. You can quote me on that.
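For reference, here is a minimal sketch of the flat-array idea being proposed (the FlatMatrix name is made up for illustration). A single contiguous buffer indexed in row-major order is often friendlier to the cache than a vector of vectors, but only a measurement on your actual workload can confirm any speedup:

```cpp
#include <cstddef>
#include <vector>

// Sketch of the proposed flat layout: one contiguous buffer for a
// rows x cols matrix, indexed manually in row-major order.
struct FlatMatrix {
    std::size_t rows, cols;
    std::vector<int> data;

    FlatMatrix(std::size_t r, std::size_t c) : rows(r), cols(c), data(r * c) {}

    // Element (i, j) lives at offset i * cols + j in the flat buffer.
    int&       at(std::size_t i, std::size_t j)       { return data[i * cols + j]; }
    const int& at(std::size_t i, std::size_t j) const { return data[i * cols + j]; }
};
```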
[Closed 11 hours ago: needs more focus.]
I program embedded systems using C++. I am learning functional programming and would love to apply it more in my work. All of my data is collected discretely and often has to be analyzed online.
For example, using a moving average to smooth signals: I can make the moving-average function pure, but I can't make its state const. As another example, I can make a pure Schmitt trigger function, but I can't make all of its inputs depend only on the current state.
What is the best way, in a functional sense, to store data from previous time steps?
Also, do you have any favorite references for embedded/functional programming?
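(For illustration, here is a minimal sketch of one common functional pattern for this, not something established above: make the state an explicit value. The update function takes the previous state as a const input and returns a new state along with the output, so the function itself stays pure and the caller threads the state through each time step.)

```cpp
#include <array>
#include <cstddef>
#include <utility>

// State-passing sketch: the moving-average "state" (a ring buffer plus a
// running sum) is an explicit, immutable input; the update returns a
// brand-new state together with the smoothed output.
template <std::size_t N>
struct MovAvgState {
    std::array<float, N> window{};  // last N samples (zero-initialized)
    std::size_t next = 0;           // index of the slot to overwrite next
    float sum = 0.0f;               // running sum of the window
};

template <std::size_t N>
std::pair<MovAvgState<N>, float> moving_average(const MovAvgState<N>& s, float x) {
    MovAvgState<N> out = s;                  // copy: the input state is untouched
    out.sum = s.sum - s.window[s.next] + x;  // swap the oldest sample for the new one
    out.window[s.next] = x;
    out.next = (s.next + 1) % N;
    // Note: the first N-1 outputs average in the zero-initialized slots.
    return {out, out.sum / static_cast<float>(N)};
}
```

On an embedded target the state copy is cheap for small N, and the compiler can often elide it entirely.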
[Closed 8 months ago: needs more focus.]
I'm trying to think of an interesting, reusable way to implement big integers that either use a given number of bytes or resize themselves when needed. I have no idea how to make it optimal in any way, though. Are there any tricks I could use, or do I simply have to work on those numbers bit by bit while adding/multiplying/dividing?
Edit: in case it matters, I need it to store text as a number in base 10 so I can play with some ideas for encrypting it.
Use the GNU Multiple Precision Arithmetic Library (GMP). If you try to reinvent the wheel, you will end up with a square.
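For instance, a minimal sketch with GMP's C++ interface (gmpxx), including the base-10 round trip the edit asks about; compile with -lgmpxx -lgmp:

```cpp
#include <gmpxx.h>
#include <iostream>
#include <string>

int main() {
    // Construct an arbitrarily large integer from a base-10 string.
    mpz_class n("123456789012345678901234567890", 10);

    // Ordinary operators work; GMP handles resizing and carries internally.
    mpz_class squared = n * n;
    mpz_class sum     = n + 42;

    // Convert back to a base-10 string.
    std::string digits = squared.get_str(10);
    std::cout << digits << '\n';
}
```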
[Closed 3 years ago: needs details or clarity.]
Develop an algorithm that can be used to determine whether a Stack object S has exactly one element.
I am not a programmer; I chose this course as an elective to see if I would be interested. I don't know where to begin with this question.
You most likely want to check the C++ documentation on stacks. This took no time to find:
http://www.cplusplus.com/reference/stack/stack/size/
Your function just has to return stack.size() == 1. If the question is instead asking for an algorithm based on the amount of memory the stack allocates, then it's slightly more involved.
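For illustration, a sketch of both readings (the function names are mine):

```cpp
#include <stack>

// Direct approach: std::stack already knows its element count.
bool hasExactlyOne(const std::stack<int>& s) {
    return s.size() == 1;
}

// Alternative for an abstract Stack offering only push/pop/top/empty:
// pop once, note whether the rest is empty, then restore the element.
bool hasExactlyOneUsingPops(std::stack<int>& s) {
    if (s.empty()) return false;   // zero elements
    int top = s.top();
    s.pop();
    bool exactlyOne = s.empty();   // true iff nothing was left underneath
    s.push(top);                   // restore the stack to its original state
    return exactlyOne;
}
```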
[Closed 3 years ago: needs details or clarity.]
I'm working with Z3 bitvecs of different sizes, and I was looking for a way to ease the workload. I can get the information from an object before the Z3 expression is created, so it's not actually a vital problem, but I wondered why Z3 bitvecs don't carry runtime size information.
You can certainly query the sort of every Z3 AST term and then get the size for bit-vectors; so yes, they do carry size information, along with pretty much everything else you need to know.
The relevant calls are:
get_sort
bv_size
The API documentation has myriad other calls for scrutinizing different parts of terms.
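For example, with the C++ API (z3++.h) these are spelled expr::get_sort() and sort::bv_size(); a minimal sketch:

```cpp
#include <iostream>
#include "z3++.h"

int main() {
    z3::context c;

    // A 32-bit bit-vector constant; its width travels with its sort.
    z3::expr x = c.bv_const("x", 32);

    z3::sort s = x.get_sort();   // query the sort of the AST term
    if (s.is_bv()) {
        std::cout << "width: " << s.bv_size() << " bits\n";  // prints 32
    }
}
```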
[Closed 9 years ago: opinion-based.]
I'm developing an estimation algorithm in C++, and performance is key. Basically, there is a loop in which, at each iteration, a decision is made whether to add a column vector to a matrix or to remove one.
I have implemented my own matrix and vector classes and used Intel MKL for the matrix operations. However, after the first version, I'm now looking into using Armadillo.
I would like to know the best strategy for dynamically growing matrices inside loops. I know the maximum size of the matrix, so I could preallocate.
First of all, is there another matrix library you would recommend over Armadillo for small matrices (50 x 50)?
Secondly, what would be the best way to tackle this problem using Armadillo?
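By way of illustration only (a sketch of the preallocation idea, not a benchmarked recommendation): allocate the matrix once at its known maximum size, track how many leading columns are live, and operate on a view of just those columns. Removal is shown only for the last column; removing an interior column would need a shift or an index map.

```cpp
#include <armadillo>

int main() {
    const arma::uword n_rows = 50, max_cols = 50;

    // Allocate once at the known maximum size; no reallocation inside the loop.
    arma::mat M(n_rows, max_cols, arma::fill::zeros);
    arma::uword n_active = 0;  // how many leading columns are currently live

    for (int iter = 0; iter < 100; ++iter) {
        bool add = (iter % 3 != 0);  // stand-in for the real decision logic

        if (add && n_active < max_cols) {
            M.col(n_active) = arma::randu<arma::vec>(n_rows);  // append a column
            ++n_active;
        } else if (!add && n_active > 0) {
            --n_active;  // "remove" the last column by shrinking the live count
        }

        if (n_active > 0) {
            // Operate on a view of the live columns only; Armadillo creates
            // temporaries only where the expression requires them.
            arma::mat G = M.cols(0, n_active - 1).t() * M.cols(0, n_active - 1);
            (void)G;
        }
    }
}
```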