I have the following situation:
A stream of at least 10M elements. But there is a minimum of 1M: the stream must fail if the minimum is not reached.
The stream is implemented in Akka Streams like this:
Source.range(0,10_000_000).via(someBusiness())
.runWith(Sink.ignore(),actorSystem);
A sink such as Sink.seq() is not an option because I don't want such big collections in memory.
I would expect some Flow that counts the elements and verifies the count when the stream ends.
The broad idea is to:
transform every incoming element from the source into a wrapper (a single-element array is probably the easiest way in Java (which is what I'm guessing you're writing this in) to accomplish this)
concat a Source.single which generates a wrapper which is easily distinguishable (e.g. a zero-element array); the stream will not go to this source until the main source has been exhausted
track the element count and throw if the minimum number of elements has not been reached and we see the concatted element; otherwise unwrap the element
So something like this (apologies if the Java is atrocious):
int[] zeroElems = new int[0];

Source.range(0, 10_000_000)
    .map(x -> {
        int[] wrapped = new int[1];
        wrapped[0] = x;
        return wrapped;
    })
    .concat(Source.single(zeroElems))
    .statefulMapConcat(
        () -> {
            // trick to get around Java's prohibition on closing over non-final variables
            final int[] counter = {0};
            return (wrapper) -> {
                if (counter[0] < 1_000_000) {
                    if (wrapper.length < 1) {
                        throw new RuntimeException("Insufficient number of elements in stream");
                    }
                    counter[0]++;
                }
                // statefulMapConcat expects an Iterable, so box the ints into a list
                return Arrays.stream(wrapper).boxed().collect(Collectors.toList());
            };
        }
    )
    .via(someBusiness())
    .runWith(Sink.ignore(), actorSystem);
Related
Trying to iterate over a list of Objects to get the value of the last parameter from the iteration.
long lastSeen = 0L;
for(Object o : list) {
lastSeen = o.getLastSeenId();
}
// will make use of the lastSeen.
I can't do the same with a lambda,
long lastSeen = 0L;
list.stream().forEach(o-> {
lastSeen = o.getLastSeenId();
});
as I end up with this compile-time error:
Local variable lastSeenId defined in an enclosing scope must be final
or effectively final
I could do this and then read the largest value from the set, but I don't want to:
Set<Long> set = new HashSet<>();
list.stream().forEach(o -> {
    set.add(o.getLastSeenId());
});
Is there a better way?
The reason for doing this is to keep track of the last value the loop processed (in case an exception occurs and the loop terminates).
I don't see any reason why you need to iterate the entire list just to retrieve the last id. This can be accomplished without iteration:
long lastSeen = list.size() > 0? list.get(list.size()-1).getLastSeenId() : 0L;
However, if you plan to do other things in each iteration, not just the aforementioned, then I'd stick with your current imperative approach. Attempting to use streams here is not a good fit and doesn't gain you anything.
First reduce the set of elements to get the last element from the Stream. Then use map to get the desired value from the last element after the reduction.
final long lastSeenValue = objects.stream()
.reduce((first, second) -> second)
.map(LastSeen::getLastSeenId)
.orElse(null);
However, in my opinion, streams might not be the best choice for your problem. In that case, don't hesitate to fall back to the imperative approach.
In case you would like to keep using a Stream, you may use an AtomicLong to hold the lastSeenValue.
final AtomicLong lastSeen = new AtomicLong(0);
list.stream().forEach(o-> {
lastSeen.set(o.getLastSeenId());
});
First of all, I would not iterate the list just to fetch the last element.
If you do want to stream and get the last element, one.util.StreamEx has a better way to do that:
StreamEx.of(list)
.collect(MoreCollectors.last())
.get();
I have a global unique path table which can be thought of as a directed un-weighted graph. Each node represents either a piece of physical hardware which is being controlled, or a unique location in the system. The table contains the following for each node:
A unique path ID (int)
Type of component (char - 'A' or 'L')
String which contains a comma separated list of path ID's which that node is connected to (char[])
I need to create a function which, given a starting and ending node, finds the shortest path between the two nodes. Normally this is a pretty simple problem, but here is the issue I am having. I have a very limited amount of memory/resources, so I cannot use any dynamic memory allocation (i.e. a queue/linked list). It would also be nice if it wasn't recursive (but it wouldn't be too big of an issue if it was, as the table/graph itself is really small. Currently it has 26 nodes, 8 of which will never be hit. In the worst case there would be about 40 nodes total).
I started putting something together, but it doesn't always find the shortest path. The pseudo code is below:
bool shortestPath(int start, int end)
    if start == end
        if pathTable[start].nodeType == 'A'
            Turn on part
        end if
        return true
    else
        mark the current node
        bool val
        for each node in connectedNodes
            if node is not marked
                val = shortestPath(node.PathID, end)
            end if
        end for
        if val == true
            if pathTable[start].nodeType == 'A'
                turn on part
            end if
            return true
        end if
    end if
    return false
end function
Anyone have any ideas how to either fix this code, or know something else that I could use to make it work?
----------------- EDIT -----------------
Taking Aasmund's advice, I looked into implementing a breadth-first search. Below is some C# code which I quickly threw together using some pseudo code I found online.
pseudo code found online:
Input: A graph G and a root v of G
procedure BFS(G,v):
    create a queue Q
    enqueue v onto Q
    mark v
    while Q is not empty:
        t ← Q.dequeue()
        if t is what we are looking for:
            return t
        for all edges e in G.adjacentEdges(t) do
            u ← G.adjacentVertex(t,e)
            if u is not marked:
                mark u
                enqueue u onto Q
    return none
C# code which I wrote using this code:
public static bool newCheckPath(int source, int dest)
{
    Queue<PathRecord> Q = new Queue<PathRecord>();

    Q.Enqueue(pathTable[source]);
    pathTable[source].markVisited();

    while (Q.Count != 0)
    {
        PathRecord t = Q.Dequeue();

        if (t.pathID == pathTable[dest].pathID)
        {
            return true;
        }
        else
        {
            string connectedPaths = pathTable[t.pathID].connectedPathID;

            for (int x = 0; x < connectedPaths.Length && connectedPaths != "00"; x = x + 3)
            {
                int nextNode = Convert.ToInt32(connectedPaths.Substring(x, 2));
                PathRecord u = pathTable[nextNode];

                if (!u.wasVisited())
                {
                    u.markVisited();
                    Q.Enqueue(u);
                }
            }
        }
    }

    return false;
}
This code runs just fine; however, it only tells me if a path exists. That doesn't really work for me. Ideally, what I would like to do is, in the block "if (t.pathID == pathTable[dest].pathID)", have either a list or some way to see which nodes I had to pass through to get from the source to the destination, so that I can process those nodes there, rather than returning a list to process elsewhere. Any ideas on how I could make that change?
The most effective solution, if you're willing to use static memory allocation (or automatic, as I seem to recall that the C++ term is), is to declare a fixed-size int array (of size 41, if you're absolutely certain that the number of nodes will never exceed 40). By using two indices to indicate the start and end of the queue, you can use this array as a ring buffer, which can act as the queue in a breadth-first search.
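For illustration, a minimal C++ sketch of that idea (not the asker's code): a fixed-size array is used as a ring-buffer queue, and a predecessor array records how each node was reached, so the path asked about in the edit can be read back without any dynamic allocation. MAX_NODES, adjacency[][] and adjCount[] are hypothetical stand-ins for however the path table actually stores its connected IDs.

const int MAX_NODES = 41;

int adjCount[MAX_NODES];              // number of neighbours per node (assumed layout)
int adjacency[MAX_NODES][MAX_NODES];  // neighbour IDs per node (assumed layout)

bool shortestPath(int start, int end, int pathOut[], int* pathLen)
{
    int queue[MAX_NODES];                // fixed array used as a ring-buffer queue
    int head = 0, tail = 0;
    bool visited[MAX_NODES] = { false };
    int predecessor[MAX_NODES];

    queue[tail++ % MAX_NODES] = start;   // enqueue the start node
    visited[start] = true;
    predecessor[start] = -1;

    while (head != tail)
    {
        int t = queue[head++ % MAX_NODES];       // dequeue
        if (t == end)
        {
            // walk the predecessor chain backwards to recover the path (end -> start)
            int len = 0;
            for (int n = end; n != -1; n = predecessor[n])
                pathOut[len++] = n;              // pathOut must hold MAX_NODES entries
            *pathLen = len;
            return true;
        }
        for (int k = 0; k < adjCount[t]; ++k)
        {
            int u = adjacency[t][k];
            if (!visited[u])
            {
                visited[u] = true;
                predecessor[u] = t;
                queue[tail++ % MAX_NODES] = u;   // enqueue
            }
        }
    }
    return false;
}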
Alternatively: Since the number of nodes is so small, Bellman-Ford may be fast enough. The algorithm is simple to implement, does not use recursion, and the required extra memory is only a distance (int, or even byte in your case) and a predecessor id (int) per node. The running time is O(VE), alternatively O(V^3), where V is the number of nodes and E is the number of edges.
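And a rough sketch of the Bellman-Ford alternative, reusing the same assumed MAX_NODES / adjacency[][] / adjCount[] layout from the sketch above; since the graph is unweighted, every edge gets weight 1, and only a distance and a predecessor per node are stored.

const int INF = 1 << 20;   // "infinite" distance; larger than any real path length

// predecessor[] must have room for nodeCount entries; returns true if end is reachable
bool bellmanFordPath(int start, int end, int nodeCount, int predecessor[])
{
    int dist[MAX_NODES];
    for (int i = 0; i < nodeCount; ++i) { dist[i] = INF; predecessor[i] = -1; }
    dist[start] = 0;

    // relax every edge up to V-1 times; each edge has weight 1
    for (int pass = 0; pass < nodeCount - 1; ++pass)
        for (int v = 0; v < nodeCount; ++v)
            for (int k = 0; k < adjCount[v]; ++k)
            {
                int u = adjacency[v][k];
                if (dist[v] + 1 < dist[u])
                {
                    dist[u] = dist[v] + 1;
                    predecessor[u] = v;
                }
            }
    return dist[end] != INF;   // the path can be read back through predecessor[]
}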
I finally determined that this function is responsible for the majority of my bottleneck issues. I think it's because of the massively excessive random access that happens when most of the synapses are already active. Basically, as the title says, I need to somehow optimize the algorithm so that I'm not randomly checking a ton of active elements before landing on one of the few that are left.
Also, I included the whole function in case there are other flaws that can be spotted.
void NetClass::Explore(vector<synapse>& synapses, int& n_syns)   //add new synapses
{
    int size = synapses.size();
    assert(n_syns <= size);

    //Increase the age of each active synapse by 1
    Age_Increment(synapses);

    //make sure there is at least one inactive synapse left
    if(n_syns == size)
        return;

    //stochastically decide whether a new connection is added
    if((rand_r(seedp) % 1000) < (x / (1 + (n_syns * (y / 100)))))
    {
        n_syns++;   //a new synapse has been created

        //main inefficiency here
        while(1)
        {
            int syn = rand_r(seedp) % size;
            if (!synapses[syn].active)
            {
                synapses[syn].active = true;
                synapses[syn].weight = .04 + (float(rand_r(seedp) % 17) / 100);
                break;
            }
        }
    }
}

void NetClass::Age_Increment(vector<synapse>& synapses)
{
    for(int q = 0, size = synapses.size(); q < size; q++)
        if(synapses[q].active)
            synapses[q].age++;
}
Pass a random number, k, in the range [0, size-n_syns) to Age_Increment. Have Age_Increment return the kth empty slot.
Since you're already traversing the whole list in Age_Increment, update that function to return the list of the indexes of inactive synapses.
You can then pick a random item from that list directly.
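As an illustration of that idea (a sketch, not the original code - note that the signature of Age_Increment changes), the traversal that already happens there can collect the inactive indexes on the side:

#include <vector>

void NetClass::Age_Increment(std::vector<synapse>& synapses,
                             std::vector<int>& inactive)
{
    inactive.clear();
    for (size_t q = 0; q < synapses.size(); ++q)
    {
        if (synapses[q].active)
            synapses[q].age++;                         // same ageing as before
        else
            inactive.push_back(static_cast<int>(q));   // remember the free slot
    }
}

// In Explore, the original Age_Increment(synapses) call becomes
// Age_Increment(synapses, inactive), and the blind retry loop turns into:
//   if (!inactive.empty()) {
//       int syn = inactive[rand_r(seedp) % inactive.size()];
//       synapses[syn].active = true;
//       synapses[syn].weight = .04f + (float)(rand_r(seedp) % 17) / 100;
//   }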
This is similar to the problem of finding free blocks in memory management, so I would take a look at algorithms used in that domain, specifically free lists, which is a list of free positions. (These are usually implemented as linked lists to be able to pop elements off an end efficiently. Random access in a linked list would still be O(n) - with a smaller n, but still not the best choice for your use case.)
This is not homework.
I'm using a small "priority queue" (implemented as an array at the moment) for storing the last N items with the smallest values. This is a bit slow - O(N) item insertion time. The current implementation keeps track of the largest item in the array and discards any item that wouldn't fit, but I would still like to reduce the number of operations further.
I'm looking for a priority queue algorithm that matches the following requirements:
The queue can be implemented as an array, which has a fixed size and _cannot_ grow. Dynamic memory allocation during any queue operation is strictly forbidden.
Anything that doesn't fit into the array is discarded, but the queue keeps the N smallest elements ever encountered.
O(log(N)) insertion time (i.e. adding an element to the queue should take at most O(log(N))).
(optional) O(1) access to the *largest* item in the queue (the queue stores the *smallest* items, so the largest item will be discarded first, and I need it to reduce the number of operations).
Easy to implement/understand. Ideally something similar to binary search - once you understand it, you remember it forever.
Elements need not be sorted in any way. I just need to keep the N smallest values ever encountered. When I need them, I'll access all of them at once. So technically it doesn't have to be a queue; I just need the N smallest values to be stored.
I initially thought about using binary heaps (they can be easily implemented via arrays), but apparently they don't behave well when the array can't grow anymore. Linked lists and arrays will require extra time for moving things around. The STL priority queue grows and uses dynamic allocation (I may be wrong about that, though).
So, any other ideas?
--EDIT--
I'm not interested in the STL implementation. The STL implementation (suggested by a few people) works a bit slower than the currently used linear array due to the high number of function calls.
I'm interested in priority queue algorithms, not implementations.
Array-based heaps seem ideal for your purpose. I am not sure why you rejected them.
You use a max-heap.
Say you have an N-element heap (implemented as an array) which contains the N smallest elements seen so far.
When an element comes in, you check it against the max (O(1) time), and reject it if it is greater.
If the value coming in is lower, you modify the root to be the new value and sift down this changed value - worst case O(log N) time.
The sift-down process is simple: starting at the root, at each step you exchange this value with its larger child until the max-heap property is restored.
So you will not have to do any deletes, which you probably would have to do if you used std::priority_queue. Depending on the implementation of std::priority_queue, this could cause memory allocation/deallocation.
So you can have the code as follows:
Allocated Array of size N.
Fill it up with the first N elements you see.
heapify (you should find this in standard text books, it uses sift-down). This is O(N).
Now for any new element you get, you either reject it in O(1) time or insert it by sifting down in worst case O(log N) time.
On average, though, you probably will not have to sift the new value all the way down and might get better than O(log N) average insert time (though I haven't tried proving it).
You only allocate size N array once and any insertion is done by exchanging elements of the array, so there is no dynamic memory allocation after that.
Check out the wiki page which has pseudo code for heapify and sift-down: http://en.wikipedia.org/wiki/Heapsort
Use std::priority_queue with the largest item at the head. For each new item, discard it if it is >= the head item, otherwise pop the head item and insert the new item.
Side note: Standard containers will only grow if you make them grow. As long as you remove one item before inserting a new item (after it reaches its maximum size, of course), this won't happen.
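A small sketch of this approach, assuming int elements and a hypothetical SmallestN wrapper; once the queue is full, every insert is a pop followed by a push, so the underlying vector should not grow any further:

#include <cstddef>
#include <queue>

class SmallestN {
    std::size_t capacity_;
    std::priority_queue<int> heap_;   // largest element at the top
public:
    explicit SmallestN(std::size_t n) : capacity_(n) {}

    void add(int value) {
        if (heap_.size() < capacity_) {
            heap_.push(value);        // still warming up to N elements
        } else if (value < heap_.top()) {
            heap_.pop();              // drop the current largest
            heap_.push(value);        // make room for the smaller value
        }                             // otherwise discard the new value
    }
};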
Most priority queues I work with are based on linked lists. If you have a pre-determined number of priority levels, you can easily create a priority queue with O(1) insertion by having an array of linked lists - one linked list per priority level. Items of the same priority will of course degenerate into a FIFO, but that can be considered acceptable.
Adding and removal then becomes something like (your API may vary) ...
listItemAdd (&list[priLevel], &item); /* Add to tail */
pItem = listItemRemove (&list[priLevel]); /* Remove from head */
Getting the first item in the queue then becomes a problem of finding the non-empty linked-list with the highest priority. That may be O(N), but there are several tricks you can use to speed it up.
In your priority queue structure, keep a pointer or index or something to the linked list with the current highest priority. This would need to be updated each time an item is added or removed from the priority queue.
Use a bitmap to indicate which linked lists are not empty. Combined with a find-most-significant-bit or find-least-significant-bit algorithm, you can usually test up to 32 lists at once (a rough sketch follows below). Again, this would need to be updated on each add/remove.
Hope this helps.
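A rough sketch of the bitmap idea, assuming at most 32 priority levels (0 = highest) and leaving the actual linked-list type as a placeholder:

#include <cstdint>

struct PriorityQueue {
    std::uint32_t notEmpty = 0;       // bit i set => list[i] has items
    // List list[32];                 // one linked list per priority level (placeholder)

    void markAdded(int priLevel)   { notEmpty |=  (1u << priLevel); }
    void markDrained(int priLevel) { notEmpty &= ~(1u << priLevel); }

    // Returns the highest-priority non-empty level, or -1 if all are empty.
    int highestNonEmpty() const {
        if (notEmpty == 0) return -1;
        for (int i = 0; i < 32; ++i)           // or use a CPU find-first-set instruction
            if (notEmpty & (1u << i)) return i;
        return -1;
    }
};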
If the number of priorities is small and fixed, you can use a ring buffer for each priority. That will waste space if the objects are big, but if their size is comparable to a pointer/index, then variants that store additional pointers inside the objects would increase the size of the array in much the same way.
Or you can use a simple singly-linked list inside the array and store 2*M+1 pointers/indexes: one points to the first free node, and the other pairs point to the head and tail of each priority. In that case you'll have to do, on average, O(M) comparisons before taking out the next node in O(1). Insertion will take O(1).
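For illustration, a sketch of that "linked list inside an array" layout with made-up sizes and names (CAPACITY, LEVELS, ArrayListQueue): each node stores the index of its successor, one index tracks the free list, and each priority level keeps a head and a tail index.

const int CAPACITY = 64;   // illustrative fixed sizes
const int LEVELS   = 4;

struct Node { int value; int next; };

struct ArrayListQueue {
    Node nodes[CAPACITY];
    int freeHead;
    int head[LEVELS], tail[LEVELS];

    ArrayListQueue() : freeHead(0) {
        for (int i = 0; i < CAPACITY; ++i) nodes[i].next = i + 1;
        nodes[CAPACITY - 1].next = -1;              // end of the free list
        for (int p = 0; p < LEVELS; ++p) head[p] = tail[p] = -1;
    }

    bool push(int priLevel, int value) {            // O(1)
        if (freeHead == -1) return false;           // array is full
        int n = freeHead; freeHead = nodes[n].next; // take a node off the free list
        nodes[n].value = value; nodes[n].next = -1;
        if (tail[priLevel] == -1) head[priLevel] = n;
        else nodes[tail[priLevel]].next = n;
        tail[priLevel] = n;
        return true;
    }

    bool pop(int priLevel, int* out) {              // O(1) once the level is chosen
        int n = head[priLevel];
        if (n == -1) return false;
        *out = nodes[n].value;
        head[priLevel] = nodes[n].next;
        if (head[priLevel] == -1) tail[priLevel] = -1;
        nodes[n].next = freeHead; freeHead = n;     // return the node to the free list
        return true;
    }
};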
If you construct an STL priority queue at the maximum size (perhaps from a vector initialized with placeholders), and then check the size before inserting (removing an item if necessary beforehand) you'll never have dynamic allocation during insert operations. The STL implementation is quite efficient.
See Matters Computational, page 158. The implementation there is quite good, and you can even tweak it a little without making it less readable. For example, where it computes the left child like:
int left = 2 * i;
you can compute the right child like so:
int right = left + 1;
Found a solution ("difference" means "priority" in the code, and maxRememberedResults is 255; it could be any 2^n - 1):
template <typename T> inline void swap(T& a, T& b){
    T c = a;
    a = b;
    b = c;
}

struct MinDifferenceArray{
    enum{ maxSize = maxRememberedResults };
    int size;
    DifferenceData data[maxSize];

    void add(const DifferenceData& val){
        if (size >= maxSize){
            if (data[0].difference <= val.difference)
                return;
            data[0] = val;
            for (int i = 0; (2*i + 1) < maxSize; ){
                int next = 2*i + 1;
                if (data[next].difference < data[next+1].difference)
                    next++;
                if (data[i].difference < data[next].difference)
                    swap(data[i], data[next]);
                else
                    break;
                i = next;
            }
        }
        else{
            data[size++] = val;
            for (int i = size - 1; i > 0; ){
                int parent = (i - 1)/2;
                if (data[parent].difference < data[i].difference){
                    swap(data[parent], data[i]);
                    i = parent;
                }
                else
                    break;
            }
        }
    }

    void clear(){
        size = 0;
    }

    MinDifferenceArray()
        : size(0){
    }
};
Build a max-based heap (root is largest).
Until it is full, fill it up normally.
When it is full, for every new element:
check if the new element is smaller than the root;
if it is larger than or equal to the root, reject it;
otherwise, replace the root with the new element and perform a normal heap "sift-down".
And we get O(log(N)) insert as the worst-case scenario.
It is the same solution as the one provided by user with nickname "Moron".
Thanks to everyone for replies.
P.S. Apparently programming without sleeping enough wasn't a good idea.
It's better to implement your own class using std::array and heap algorithms.
#include <array>
#include <algorithm>

template<class T, int fixed_size = 5>
class fixed_size_arr_pqueue_v2
{
    std::array<T, fixed_size> _data;
    int _size = 0;

    int parent(int i)
    {
        return (i - 1)/2;
    }

    void heapify(int i, bool downward = false)
    {
        int l = 2*i + 1;
        int r = 2*i + 2;
        int largest = 0;
        if (l < size() && _data[l] > _data[i])
            largest = l;
        else
            largest = i;
        if (r < size() && _data[r] > _data[largest])
            largest = r;
        if (largest != i)
        {
            std::swap(_data[largest], _data[i]);
            if (!downward)
                heapify(parent(i));
            else
                heapify(largest, true);
        }
    }

public:
    void push(T &d)
    {
        if (_size == fixed_size)
        {
            //min elements in a max heap lie at leaves only
            auto minItr = std::min_element(begin(_data) + _size/2, end(_data));
            auto minPos {minItr - _data.begin()};
            auto min { *minItr};
            if (d > min)
            {
                _data.at(minPos) = d;
                if (_data[parent(minPos)] > d)
                {
                    //this is unlikely to happen in our case, as this position is a leaf
                    heapify(minPos, true);
                }
                else
                    heapify(parent(minPos));
            }
            return;
        }

        _data.at(_size++) = d;
        std::push_heap(_data.begin(), _data.begin() + _size);
    }

    T pop()
    {
        T d = _data.front();
        std::pop_heap(_data.begin(), _data.begin() + _size);
        _size--;
        return d;
    }

    T top()
    {
        return _data.front();
    }

    int size() const
    {
        return _size;
    }
};
I am working on a query processor that reads in long lists of document id's from memory and looks for matching id's. When it finds one, it creates a DOC struct containing the docid (an int) and the document's rank (a double) and pushes it on to a priority queue. My problem is that when the word(s) searched for has a long list, when I try to push the DOC on to the queue, I get the following exception:
Unhandled exception at 0x7c812afb in QueryProcessor.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0012ee88..
When the word has a short list, it works fine. I tried pushing DOC's onto the queue in several places in my code, and they all work until a certain line; after that, I get the above error. I am completely at a loss as to what is wrong because the longest list read in is less than 1 MB and I free all memory that I allocate. Why should there suddenly be a bad_alloc exception when I try to push a DOC onto a queue that has a capacity to hold it (I used a vector with enough space reserved as the underlying data structure for the priority queue)?
I know that questions like this are almost impossible to answer without seeing all the code, but it's too long to post here. I'm putting as much as I can and am anxiously hoping that someone can give me an answer, because I am at my wits' end.
The NextGEQ function reads a list of compressed blocks of docids block by block. That is, if it sees that the lastdocid in the block (in a separate list) is larger than the docid passed in, it decompresses the block and searches until it finds the right one. Each list starts with metadata about the list with the lengths of each compressed chunk and the last docid in the chunk. data.iquery points to the beginning of the metadata; data.metapointer points to wherever in the metadata the function currently is; and data.blockpointer points to the beginning of the block of uncompressed docids, if there is one. If it sees that it was already decompressed, it just searches. Below, when I call the function the first time, it decompresses a block and finds the docid; the push onto the queue after that works. The second time, it doesn't even need to decompress; that is, no new memory is allocated, but after that time, pushing on to the queue gives a bad_alloc error.
Edit: I cleaned up my code some more so that it should compile. I also added in the OpenList() and NextGEQ functions, although the latter is long, because I think the problem is caused by a heap corruption somewhere in it. Thanks a lot!
struct DOC{
    long int docid;
    long double rank;

public:
    DOC()
    {
        docid = 0;
        rank = 0.0;
    }

    DOC(int num, double ranking)
    {
        docid = num;
        rank = ranking;
    }

    bool operator>( const DOC & d ) const {
        return rank > d.rank;
    }

    bool operator<( const DOC & d ) const {
        return rank < d.rank;
    }
};

struct listnode{
    int* metapointer;
    int* blockpointer;
    int docposition;
    int frequency;
    int numberdocs;
    int* iquery;
    listnode* nextnode;
};

void QUERYMANAGER::SubmitQuery(char *query){
    listnode* startlist;
    vector<DOC> docvec;
    docvec.reserve(20);
    DOC doct;

    //create a priority queue to use as a min-heap to store the documents and rankings;
    priority_queue<DOC, vector<DOC>, std::greater<DOC>> q(docvec.begin(), docvec.end());
    q.push(doct);

    //do some processing here; startlist is a pointer to a listnode struct that starts the linked list

    //point the linked list start pointer to the node returned by the OpenList method
    startlist = &OpenList(value);
    listnode* minpointer;
    q.push(doct);

    //start by finding the first docid in the shortest list
    int i = 0;
    q.push(doct);

    num = NextGEQ(0, *startlist);
    q.push(doct);

    while(num != -1)
    {
        q.push(doct);
        //this is where the problem starts - every previous q.push(doct) works; the one after
        //NextGEQ(num + 1, *startlist) gives the bad_alloc error
        num = NextGEQ(num + 1, *startlist);
        //this is where the exception is thrown
        q.push(doct);
    }
}
//takes a word and returns a listnode struct with a pointer to the beginning of the list
//and metadata about the list
listnode QUERYMANAGER::OpenList(char* word)
{
    long int numdocs;

    //create a new node in the linked list and initialize its variables
    listnode n;
    n.iquery = cache -> GetiList(word, &numdocs);
    n.docposition = 0;
    n.frequency = 0;
    n.numberdocs = numdocs;
    //an int pointer to point to where in the metadata you are
    n.metapointer = n.iquery;
    n.nextnode = NULL;
    //an int pointer to point to the uncompressed block of data, if there is one
    n.blockpointer = NULL;
    return n;
}
int QUERYMANAGER::NextGEQ(int value, listnode& data)
{
    int lengthdocids;
    int lengthfreqs;
    int lengthpos;
    int* temp;
    int lastdocid;

    lastdocid = *(data.metapointer + 2);

    while(true)
    {
        //if it's not the first chunk in the list, the blockpointer will be pointing to the
        //most recently opened block and docpos to the current position in the block
        if( data.blockpointer && lastdocid >= value)
        {
            //if the last docid in the chunk is >= the docid we're looking for,
            //go through the chunk to look for a match
            //the last docid in the block is in lastdocid; keep going until you hit it
            while(*(data.blockpointer + data.docposition) <= lastdocid)
            {
                //compare each docid with the docid passed in; if it's greater than or equal to it, return a pointer to the docid
                if(*(data.blockpointer + data.docposition) >= value)
                {
                    //return the next greater than or equal docid
                    return *(data.blockpointer + data.docposition);
                }
                else
                {
                    ++data.docposition;
                }
            }

            //read through the whole block; couldn't find matching docid; increment metapointer to the next block;
            //free the block's memory
            data.metapointer += 3;
            lastdocid = *(data.metapointer + 3);
            free(data.blockpointer);
            data.blockpointer = NULL;
        }

        //reached the end of a block; check the metadata to find where the next block begins and ends and whether
        //the last docid in the block is smaller or larger than the value being searched for
        //first make sure that you haven't reached the end of the list
        //if the last docid in the chunk is still smaller than the value passed in, move the metadata pointer
        //to the beginning of the next chunk's metadata; read in the new metadata
        while(true)
        //  while(*(metapointers[index]) != 0 )
        {
            if(lastdocid < value && *(data.metapointer) != 0)
            {
                data.metapointer += 3;
                lastdocid = *(data.metapointer + 2);
            }
            else if(*(data.metapointer) == 0)
            {
                return -1;
            }
            else
            //we must have hit a chunk whose lastdocid is >= value; read it in
            {
                //read in the metadata
                //the length of the chunk of docid's is cumulative, so subtract the end of the last chunk
                //from the end of this chunk to get the length

                //find the end of the metadata
                temp = data.metapointer;
                while(*temp != 0)
                {
                    temp += 3;
                }
                temp += 2;

                //temp is now pointing to the beginning of the list of compressed data; use the location of metapointer
                //to calculate where to start reading and how much to read
                //if it's the first chunk in the list, the corresponding metapointer is pointing to the beginning of the query
                //so the number of bytes of docid's is just the first integer in the metadata
                if( data.metapointer == data.iquery)
                {
                    lengthdocids = *data.metapointer;
                }
                else
                {
                    //start reading from the offset of the end of the last chunk (saved in metapointers[index] - 3)
                    //plus 1 = the beginning of this chunk
                    lengthdocids = *(data.metapointer) - (*(data.metapointer - 3));
                    temp += (*(data.metapointer - 3)) / sizeof(int);
                }

                //allocate memory for an array of integers - the block of docid's uncompressed
                int* docblock = (int*)malloc(lengthdocids * 5);
                //decompress docid's into the block of memory allocated
                s9decompress((int*)temp, lengthdocids / 4, (int*)docblock, true);

                //set the blockpointer to point to the beginning of the block
                //and docpositions[index] to 0
                data.blockpointer = docblock;
                data.docposition = 0;
                break;
            }
        }
    }
}
Thank you very much, bsg.
QUERYMANAGER::OpenList returns a listnode by value. In startlist = &OpenList(value); you then proceed to take the address of the temporary object that's returned. When the temporary goes away, you may be able to access the data for a time and then it's overwritten. Could you just declare a non-pointer listnode startlist on the stack and assign it the return value directly? Then remove the * in front of other uses and see if that fixes the problem.
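For illustration, the suggested change might look like this (using the names from the snippet above):

listnode startlist = OpenList(value);   // keep a copy of the returned value in a local variable

// ...and pass it on by reference where a listnode& is expected:
num = NextGEQ(0, startlist);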
Another thing you can try is replacing all pointers with smart pointers, specifically something like boost::shared_ptr<>, depending on how much code this really is and how much you're comfortable automating the task. Smart pointers aren't the answer to everything, but they're at least safer than raw pointers.
Assuming you have heap corruption and are not in fact exhausting memory, the commonest way a heap can get corrupted is by deleting (or freeing) the same pointer twice. You can quite easily find out if this is the issue by simply commenting out all your calls to delete (or free). This will cause your program to leak like a sieve, but if it doesn't actually crash you have probably identified the problem.
The other common cause of a corrupt heap is deleting (or freeing) a pointer that wasn't ever allocated on the heap. Differentiating between the two causes of corruption is not always easy, but your first priority should be to find out whether corruption is actually the problem.
Note that this approach won't work too well if the things you are deleting have destructors which, if not called, break the semantics of your program.
Thanks for all your help. You were right, Neil - I must have managed to corrupt my heap. I'm still not sure what was causing it, but when I changed the malloc(numdocids * 5) to malloc(256) it magically stopped crashing. I suppose I should have checked whether or not my mallocs were actually succeeding! Thanks again!
Bsg