Given 8-bit/32-bit data, starting from a given higher bit position, reverse the bits down to the given lower position and save the result.
Example,
Reverse_specific_bits(char *data, int pos, int num_of_bits)
Position – 5
Number of bits to be reversed = 5
Given data – 01011011
Resultant data – 01101101
Could anyone please help me write a function for this?
Regards,
Vignesh
The simplest way of doing this is:
Save the prefix and postfix bits that don't change in two separate queues.
Iterate over the bits that need to be reversed and push them onto a stack.
Get a new queue and push the prefix queue, then the stack, and finally the postfix queue onto it.
This is a very naive approach; there are many more efficient ways of doing this.
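As a hedged sketch of one such alternative that works in place with bit masks (rather than queues and a stack), something like the following could do. It uses unsigned char to avoid sign issues, and the assumption that pos is the index of the highest bit in the range comes from the example above, not from a stated spec.

#include <cstdio>

// Reverse num_of_bits bits of *data, starting at bit index pos (assumed to be
// the highest bit of the range) and moving towards the least significant bit.
void Reverse_specific_bits(unsigned char *data, int pos, int num_of_bits)
{
    unsigned char value = *data;
    int hi = pos;                      // highest bit of the range
    int lo = pos - num_of_bits + 1;    // lowest bit of the range
    while (hi > lo) {
        unsigned char bh = (value >> hi) & 1u;
        unsigned char bl = (value >> lo) & 1u;
        if (bh != bl)                  // swap the two bits by toggling both
            value ^= (1u << hi) | (1u << lo);
        --hi;
        ++lo;
    }
    *data = value;
}

int main()
{
    unsigned char d = 0x5B;            // 01011011
    Reverse_specific_bits(&d, 5, 5);
    std::printf("%02X\n", d);          // prints 6D, i.e. 01101101
}

The same loop works for 32-bit data if unsigned char is replaced with uint32_t.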
So in the prolog tag someone wanted to solve "the giant cat army riddle" by Dan Finkel (see the video / link for a description of the puzzle).
Since I want to improve in answer set programming, I hereby challenge you to solve the puzzle more efficiently than me. You will find my solution as an answer. I'll accept the fastest running answer (except if it's using dirty hacks).
Rules:
hardcoding the length of the list (or something similar) counts as a dirty hack.
The output has to be in the predicate r/2, where its first argument is the index of the list and the second is its entry.
Time measured is for the first valid answer.
num(0..59).
%valid operation pairs
op(N*N,N):- N=2..7.
% no need to add operations that start with 14
op(Ori,New):- num(Ori), New = Ori+7, num(New), Ori!=14.
op(Ori,New):- num(Ori), New = Ori+5, num(New), Ori!=14.
%iteratively create new numbers from old numbers
l(0,0).
{l(T+1,New) : op(Old,New)} = 1 :- l(T,Old), num(T+1), op(Old,_).
%no number twice
:- 2 #sum {1,T : l(T,Value)}, num(Value).
%2 before 10 before 14
%linear encoding
reached(T,10) :- l(T,10).
reached(T+1,10) :- reached(T,10), num(T+1).
:- reached(T,10), l(T,2).
:- l(T,14), l(T+1,_).
%looks nicer, but quadratic
%:- l(T2,2), l(T10,10), T10<T2.
%:- l(T14,14), l(T10,10), T14<T10.
%we must have these three numbers in the list somewhere
:- not l(_,2).
:- not l(_,10).
:- not l(_,14).
#show r(T,V) : l(T,V).
#show.
Having a slightly uglier encoding improves grounding a lot (which was your main problem).
I restricted op/2 to not start with 14, as this should be the last element in the list.
I create the list iteratively. This may not be as nice, but at least for the start of the list it already removes impossible-to-reach values via grounding, so you will never have l(1,33) or l(2,45) etc.
Also list generation stops when reaching the value 14, as no more operation is possible/needed.
I also added a linearly scaling version of the "before" section, although it is not really necessary for this short list (but it is a cool trick in general if you have long lists!). This is called "chaining".
Also note that your show statement is non-trivial and does create some constraints/variables.
I hope this helps; otherwise feel free to ask such questions on our Potassco mailing list ;)
My first attempt is to generate a permutation of numbers and force successive elements to be connected by one of the 3 operations (+5, +7 or sqrt). I predefine the operations to avoid choosing/counting problems. Testing for <60 is not necessary since the output of an operation has to be a number between 0 and 59. The generated list l/2 is forwarded to the output r/2 until the number 14 appears. I guess there is plenty of room to outrun my solution.
num(0..59).
%valid operation pairs
op(N*N,N):- N=2..7.
op(Ori,New):- num(Ori), New = Ori+7, num(New).
op(Ori,New):- num(Ori), New = Ori+5, num(New).
%for each position one number
l(0,0).
{l(T,N):num(N)}==1:-num(T).
{l(T,N):num(T)}==1:-num(N).
% following numbers are connected with an operation until 14
:- l(T,Ori), not op(Ori,New), l(T+1,New), l(End,14), T+1<=End.
% 2 before 10 before 14
:- l(T2,2), l(T10,10), T10<T2.
:- l(T14,14), l(T10,10), T14<T10.
% output
r(T,E):- l(T,E), l(End,14), T<=End.
#show r/2.
First answer:
r(0,0) r(1,5) r(2,12) r(3,19) r(4,26) r(5,31) r(6,36) r(7,6)
r(8,11) r(9,16) r(10,4) r(11,2) r(12,9) r(13,3) r(14,10) r(15,15)
r(16,20) r(17,25) r(18,30) r(19,37) r(20,42) r(21,49) r(22,7) r(23,14)
There are multiple possible lists with different lengths.
Well, straight to the point: I have two arrays, say oldArray[SIZE] and newArray[SIZE]. I want to find the difference between each element of both arrays, e.g.:
oldArray[0]-newArray[0] =
oldArray[1]-newArray[1] =
oldArray[2]-newArray[2] =
:
:
oldArray[SIZE-1]-newArray[SIZE-1] =
If the difference is zero, no worries, but if the diff is >0, store the data along with the index. What is the best way to store this? I want to send this difference data to the client over the network. The only ways that I am aware of are using a vector or a dynamic array. I'd really appreciate help with this.
Update: oldArray[] and newArray[] are two image frames of a video sequence which have depth values for each pixel. I want to compute the difference between the two frames and send only the difference over the network; on the other end I will reconstruct the image frame. The data is integer, ranging from 0 to 1024. Hope this helps.
I'd go for a std::map<int,std::pair<T,T>> where the key is the index in question, and the std::pair contains the old value in first and the new value in second. No entries for equal first and second.
As for your edit, a std::map<int,int> where the key is the index and the value is the difference might be sufficient to keep your bitmaps synchronized.
How to serialize that properly over the network is a different kettle of fish.
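For illustration, here is a minimal sketch of the std::map<int,int> variant; the frame type std::vector<int> and the name computeDiff are assumptions, not part of the question.

#include <cstddef>
#include <map>
#include <vector>

// Build a sparse diff: index -> (old - new) for every pixel that changed.
// Assumes both frames have the same size.
std::map<int, int> computeDiff(const std::vector<int>& oldFrame,
                               const std::vector<int>& newFrame)
{
    std::map<int, int> diff;
    for (std::size_t i = 0; i < oldFrame.size(); ++i) {
        int d = oldFrame[i] - newFrame[i];
        if (d != 0)
            diff[static_cast<int>(i)] = d;   // store only changed pixels
    }
    return diff;
}

On the receiving side, newFrame[i] = oldFrame[i] - diff[i] for every stored entry reconstructs the new frame.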
I'm trying to write a function that will take an array or vector, raise its values to a "power of", and then display its values. I'm not too familiar with arrays, but simply put I'm trying to create something like
n = {2^1, 3^1, 5^1,2^2,3^2,5^2,....}
the "power of" is going to be looped.
I then plan to sort the array, and display 1500th term.
this problem corresponds to prime number sequence only divisible by 2 , 3 & 5;
I'm trying to find a more time efficient way than just if statements and mod operators.
If I remember correctly this is the Ugly Numbers problem I faced some years ago on UVa.
The idea to solve this problem is to use a priority queue with the numbers 2, 3 and 5 as initial values. At each step remove the topmost value t and insert the values 2*t, 3*t and 5*t into the priority queue; repeat these steps until the 1500th term is found.
See this forum for more info: http://online-judge.uva.es/board/viewtopic.php?t=93
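A rough sketch of that priority-queue idea in C++ might look as follows. It seeds the queue with 1 so the sequence starts 1, 2, 3, 4, 5, 6, ...; seeding with 2, 3 and 5 as described works the same way if 1 is not counted as a term. The std::set only skips duplicates such as 6 = 2*3 = 3*2.

#include <cstdint>
#include <functional>
#include <iostream>
#include <queue>
#include <set>
#include <vector>

// Return the n-th number whose only prime factors are 2, 3 and 5.
std::uint64_t nthUgly(int n)
{
    std::priority_queue<std::uint64_t,
                        std::vector<std::uint64_t>,
                        std::greater<std::uint64_t>> pq;   // min-heap
    std::set<std::uint64_t> seen;
    pq.push(1);
    seen.insert(1);

    std::uint64_t t = 1;
    for (int i = 0; i < n; ++i) {
        t = pq.top();                       // smallest value not yet emitted
        pq.pop();
        for (std::uint64_t f : {2, 3, 5}) {
            if (seen.insert(t * f).second)  // push each multiple only once
                pq.push(t * f);
        }
    }
    return t;
}

int main()
{
    std::cout << nthUgly(1500) << '\n';
}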
I'm not a specialist in signal processing. I'm doing simple processing on a 1D signal using C++. I really want to know how I can determine the part that has the highest zero-crossing rate (highest frequency!). Is there a simple way or method to tell the beginning and the end of this part?
This image illustrates the form of my signal, and this image shows what I need to do (the two indices of the beginning and the end).
Edited:
Actually I have no prior idea about the width of the beginning and the end; it is quite variable.
I could calculate the number of zero crossings, but I have no idea how to define its range:
double calculateZC(const vector<double>& signals){
    int ZC_counter = 0;
    int size = static_cast<int>(signals.size());
    for (int i = 0; i < size - 1; i++){
        // a crossing occurs whenever two consecutive samples differ in sign
        if ((signals[i] >= 0 && signals[i+1] < 0) || (signals[i] < 0 && signals[i+1] >= 0)){
            ZC_counter++;
        }
    }
    return ZC_counter;
}
Here is a fairly simple strategy which might give you a starting point. The outline of the algorithm is as follows.
Input: Vector of your data points {y0,y1,...}
Parameters:
Window size sigma.
A threshold 0<p<1 defining when to start looking for a region.
Output: The start and end points {t0,t1} of the region with the most zero crossings
I won't give any C++ code, but the method should be easy to implement. As an example, let us use the following function.
What we want is the region between about 480 and 600, where the zero-crossing density is higher than in the front part. The first step of the algorithm is to calculate the positions of the zeros. You can do this with what you already have, but instead of counting, you store the values of i at which you found a crossing.
This will give you a list of zero positions
From this list (you can do this directly in the above for-loop!) you create a list having the same size as your input data which looks like {0,0,0,...,1,0,..,1,0,..}. Every zero-crossing position in your input data is marked with a 1.
The next step is to smooth this list with a smoothing filter of size sigma. Here, you can use what you like; in the simplest case a moving average or a Gaussian filter. The larger you choose sigma, the bigger your look-around window becomes, which measures how many zero crossings are around a certain point. Here is the output of this filter together with the original zero positions. Note that I used a Gaussian filter of size 10 here.
In the next step, you go through the filtered data and find the maximum value. In this case it is about 0.15. Now you choose your second parameter, which is some percentage of this maximum. Let's say p=0.6.
The final step is to go through the filtered data, and when the value is greater than p you start to remember a new region. As soon as the value drops below p, you end this region and remember the start and end point. Once you are finished walking through the data, you are left with a list of regions, each defined by a start and an end point. Now you choose the region with the biggest extent and you are done.
(Optionally, you could add the filter size to each end of the final region)
For the above example, I get 11 regions as follows
{{164,173},{196,205},{220,230},{241,252},{259,271},{278,290},
{297,309},{318,327},{341,350},{458,468},{476,590}}
where the one with the biggest extent is the last one, {476,590}. The final result looks like this (with 1/2 filter-size padding):
Conclusion
Please don't be discouraged by the length of my answer. I tried to explain everything in detail. The implementation is really just some loops (a rough sketch follows the list below):
one loop to create the zero-crossings list {0,0,..,1,0,...}
one nested loop for the moving average filter (or you use some library Gaussian filter). Here you can at the same time extract the maximum value
one loop to extract all regions
one loop to extract the largest region if you haven't already extracted it in the above step
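For illustration only, a possible C++ sketch of those loops is given here; it uses a plain moving-average window instead of a Gaussian filter, and the function name and parameters are placeholders.

#include <algorithm>
#include <utility>
#include <vector>

// Return the start/end indices of the region with the densest zero crossings.
// sigma is the half-width of the smoothing window, p the relative threshold.
std::pair<int, int> densestZeroCrossingRegion(const std::vector<double>& signal,
                                              int sigma, double p)
{
    const int n = static_cast<int>(signal.size());

    // 1) mark every zero-crossing position with a 1
    std::vector<double> marks(n, 0.0);
    for (int i = 0; i + 1 < n; ++i)
        if ((signal[i] >= 0 && signal[i + 1] < 0) ||
            (signal[i] < 0 && signal[i + 1] >= 0))
            marks[i] = 1.0;

    // 2) smooth with a moving average and track the maximum density
    std::vector<double> density(n, 0.0);
    double maxDensity = 0.0;
    for (int i = 0; i < n; ++i) {
        double sum = 0.0;
        int count = 0;
        for (int j = std::max(0, i - sigma); j <= std::min(n - 1, i + sigma); ++j) {
            sum += marks[j];
            ++count;
        }
        density[i] = sum / count;
        maxDensity = std::max(maxDensity, density[i]);
    }

    // 3) walk through the filtered data and keep the widest above-threshold region
    const double threshold = p * maxDensity;
    std::pair<int, int> best(0, 0);
    int start = -1;
    for (int i = 0; i < n; ++i) {
        if (density[i] > threshold && start < 0) {
            start = i;                              // a region begins
        } else if (density[i] <= threshold && start >= 0) {
            if (i - start > best.second - best.first)
                best = {start, i};                  // widest region so far
            start = -1;
        }
    }
    if (start >= 0 && n - start > best.second - best.first)
        best = {start, n - 1};                      // region runs to the end
    return best;
}

Calling it with sigma = 10 and p = 0.6 roughly corresponds to the Gaussian filter of size 10 and the 60% threshold used in the example above.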
I need to keep track of indexes in a large text file. I have been keeping a std::map of indexes and accompanying data as a quick hack. If the user is on character 230,400 in the text, I can display any meta-data for that position in the text.
Now that my maps are getting larger, I'm hitting some speed issues (as expected).
For example, if the text is modified at the beginning, I need to increment the indexes after that position in the map by one, an O(N) operation.
What's a good way to change this to O(log N) complexity? I've been looking at AVL Arrays, which is close.
I'm hoping for O(log n) time for updates and searches. For example, if the user is on character 500,000 in the text array, I want to very quickly find if there is any meta data for that character.
(Forgot to add: The user can add meta data whenever they like)
Easy. Make a binary tree of offsets.
The value of any offset is computed by traversing the tree from the leaf to the root, adding the parent's offset any time the current node is a right child.
Then, if you add text early in the file, you only need to update the offsets for the nodes which are parents of the offsets that change. That is, say you added text before the very first offset: you add the number of characters added to the root node, and now one half of your offsets have been corrected. Now traverse to the left child and add the offset again; now 3/4 of the offsets have been updated. Continue traversing left children, adding the offset, until all the offsets are updated.
#OP:
Say you have a text buffer with 8 characters, and 4 offsets into the odd bytes:
the tree:                 5
                        /   \
                       3     2
                      / \   / \
                     1   0 0   0
sum of right
children (indices)   1   3 5   7
Now say you inserted 2 bytes at offset 4. Buffer was:
01234567
Now it's
0123xx4567
So you modify just the nodes that dominate the parts of the array that changed. In this case only the root node needs to be modified.
the tree:                 7
                        /   \
                       3     2
                      / \   / \
                     1   0 0   0
sum of right
children (indices)   1   3 7   9
The summation rule is: walking from leaf to root, I add to my own value the value of my parent whenever I am that parent's right child.
To find whether there is an index at my current location, I start at the root and ask whether this offset (plus whatever I have accumulated) is greater than my location. If yes, I traverse left and add nothing. If no, I traverse right and add the node's value to my running total. If at the end of the traversal the accumulated value is equal to my location, then yes, there is an annotation. You can do a similar traversal with a minimum and a maximum index to find the node that dominates all the indices in that range, i.e. all the indices into the text I'm displaying.
Oh, and this is just a toy example. In reality you need to periodically rebalance the tree; otherwise there is a chance that if you keep adding new indices in just one part of the file you will get a tree which is way out of balance, and worst-case performance would no longer be O(log2 n) but O(n). To keep the tree balanced you would need to implement a balanced binary tree like a red-black tree. That would guarantee O(log2 n) performance, where n is the number of metadata entries.
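To make the traversal concrete, here is a toy, unbalanced sketch of the lookup described above; the struct name OffsetNode and the helper contains are illustrative. The invariant it relies on is that a node's accumulated offset equals the smallest index stored in its right subtree, as in the worked example.

// Toy sketch of the relative-offset tree from the example above (unbalanced).
// Each node stores its offset relative to the sum of the values picked up on
// right-child links along the path from the root; indices live at the leaves.
struct OffsetNode {
    long offset;
    OffsetNode* left;
    OffsetNode* right;
};

// Returns true if an annotation exists exactly at absolute position pos.
bool contains(const OffsetNode* node, long pos, long accumulated = 0)
{
    if (!node) return false;
    long split = accumulated + node->offset;
    if (!node->left && !node->right)            // leaf: split is its absolute index
        return pos == split;
    if (pos < split)                            // smaller: go left, add nothing
        return contains(node->left, pos, accumulated);
    return contains(node->right, pos, split);   // otherwise: go right, carry the sum
}

int main()
{
    // the example tree: annotations at indices 1, 3, 5, 7
    OffsetNode l1{1, nullptr, nullptr}, l2{0, nullptr, nullptr};
    OffsetNode l3{0, nullptr, nullptr}, l4{0, nullptr, nullptr};
    OffsetNode n3{3, &l1, &l2}, n2{2, &l3, &l4};
    OffsetNode root{5, &n3, &n2};
    return contains(&root, 3) && !contains(&root, 4) ? 0 : 1;  // expects 0
}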
Don't store indices! There's no way to do that and simultaneously have performance better than O(n) - add a character at the beginning of the array and you'll have to increment n - 1 indices, no way around it.
But if you store substring lengths instead, you'd only have to change one length per level of the tree structure, bringing you down to O(log n). My (untested) solution would be to use a Rope with metadata attached to the nodes - you might need to play around with that a bit, but I think it's a solid foundation.
Hope that helps!