I use modules written by others, as follows:
https://github.com/tarzanjw/pysnowflake/blob/master/snowflake/server/generator.py
I first ran a single-threaded test on my own: there was no duplication at all, and there was still none after adding locks and testing with multiple threads.
But when I generate IDs in the actual Django project, where about 10 threads submit a total of 1600 records, a few dozen duplicates show up each time.
I printed id() of the generator, and this snowflake generator instance has not been re-initialized. How can I troubleshoot this?
The approximate code is as follows:
import logging

from newsnow import Generator

from . import models  # the app's Django models

logger = logging.getLogger('django-production')
get_flake_id = Generator(dc=0, worker=0)

def create_product_meta(prepare_product_meta):
    new_product = models.productMeta(
        own_store=prepare_product_meta.get("own_store").upper(),
        product_name=prepare_product_meta.get("product_name"),
    )
    logger.warning(id(get_flake_id))
    new_product.flake_id = get_flake_id.get_next_id()
    return new_product
Any suggestions? Thanks!
As far as I can see, the code does not use any Snowflake object. It generates the IDs using a simple calculation:
generated_id = ((curr_time - EPOCH_TIMESTAMP) << 22) | (self.node_id << 12) | self.sequence
It uses node_id when calculating the ID, and node_id is computed with the following formula:
self.node_id = ((dc & 0x03) << 8) | (worker & 0xff)
So as a workaround, you may give different dc and worker values for each thread.
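For example, here is a minimal sketch of that workaround (assuming Python 3 and the Generator class from the linked module; the helper names are illustrative, not from the question): keep one generator per thread, each with its own worker value.

import threading

from newsnow import Generator

_local = threading.local()

def _thread_generator():
    # One generator per thread. Note that worker must fit in 8 bits
    # (worker & 0xff above), so masking the thread ident can still
    # collide if many threads are created.
    if not hasattr(_local, "gen"):
        _local.gen = Generator(dc=0, worker=threading.get_ident() & 0xFF)
    return _local.gen

def next_flake_id():
    return _thread_generator().get_next_id()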
Or, if you can use database objects, you can create a Snowflake sequence and handle the uniqueness of the sequence on the database side:
https://docs.snowflake.com/en/user-guide/querying-sequences.html
I found the cause.
Django starts multiple worker processes, and the generator instances in those processes are isolated from each other, so each keeps its own sequence counter and they can hand out the same IDs.
The fix is to use the pid as the worker_id so that the processes are physically separated.
Like this:
import os

get_flake_id = Generator(dc=0, worker=os.getpid())
Related
In a distributed Erlang system pids can have two different representations: i) internal; ii) external.
The internal representation has the following shape: < A.B.C >
The external representation, used for instance when a message has to travel across different nodes, is instead composed of the following elements: < node_id, ID, serial, creation > according to the official documentation.
Here node_id is the name of the node, ID and serial identify the process on node_id, and creation is an integer used to distinguish the node from past (crashed) versions of itself.
What I could not find is how the creation integer is created by the VM.
By setting up a small experiment on my PC, I have seen that if I create and kill the same node several times, the counter always increases by 1. When creating the same node on different machines, the creation integers differ but have a similar structure, for instance:
machine 1 -> creation integer = 1647595383
machine 2 -> creation integer = 1647596018
Do any of you have any knowledge about how this integer is created? If so, could you please explain it to me and possibly reference some (more or less) official documentation?
The creation is sent as part of the response to node registration in epmd; see the details of that protocol.
If you have a custom erl_epmd module, you can also provide your own way of creating the creation-value.
The original creation is the local time of when the node with that name is first registered, and then it is bumped once for each time the name is re-registered. This is consistent with your experiment: the integers you observed look like Unix timestamps in seconds, and re-creating the node with the same name bumps the value by one.
We have an application which a user can run to generate data at a user-specified path. The unique output data is generated with respect to one unique input data set, which is provided by the user.
When we initially developed the application, we never anticipated that the number of unique input data sets would be large (due to the nature of the application). We expected it to be of the order of 10, whereas one user has 1000. That user started 1000 jobs of our application on a grid, all writing data to the same path. Note: these 1000 jobs are not fired from our application; he spawned 1000 processes of our application on different machines.
This led to collisions and data loss.
To guard against this, I am planning to add synchronization using boost::interprocess, along these lines:
// usual processing of input data ...

boost::filesystem::path reportLockFilePath(boost::filesystem::system_complete(userDir));
reportLockFilePath.append("report.lock");

// if the lock file does not exist, create it
if (!boost::filesystem::exists(reportLockFilePath)) {
    boost::interprocess::named_mutex reportLockMutex(boost::interprocess::open_or_create, "report_mutex");
    boost::interprocess::scoped_lock< boost::interprocess::named_mutex > lock(reportLockMutex);
    std::ofstream lockStrm(reportLockFilePath.string().c_str());
    lockStrm << "## report lock file ##" << std::endl;
    lockStrm.flush();
}

boost::interprocess::file_lock reportFileLock(reportLockFilePath.string().c_str());
boost::interprocess::scoped_lock< boost::interprocess::file_lock > lock(reportFileLock);

// usual reporting code that we already have ...
Now, my questions are:
Is this the correct synchronization for the problem at hand?
Will this synchronization scheme work when the jobs are on different machines and the path is on NFS?
If this is not going to work on NFS etc., what are the C++ alternatives? I prefer to avoid lower-level C functions, to avoid races caused by a lock still being held when one instance crashes, etc.
I just removed the named mutex part (as it was causing problems on a few machines due to a permission issue, probably related to the umask issue discussed in this context in another post) and replaced it with
std::ofstream lockStrm(reportLockFilePath.string().c_str(), std::ios_base::app);
And it worked at least in our internal testing.
In essence, the following function, called by the user of the Django application that I am developing, uses the Scapy library to process 80-odd fairly large pcaps in order to parse out their destination IP addresses.
I was wondering whether it would be possible to process several pcaps simultaneously, ideally using multi-threading, as the CPU is not being utilised to its full capacity.
from django.shortcuts import render
from scapy.all import rdpcap, IP

from .models import Pcaps, PcapsIps  # the app's models (import path assumed)

def analyseall(request):
    allpcaps = Pcaps.objects.all()
    for individualpcap in allpcaps:
        strfilename = str(individualpcap.filename)
        print(strfilename)
        pcapuuid = individualpcap.uuid
        print(pcapuuid)
        packets = rdpcap(strfilename)
        print("hokay")
        for packet in packets:
            if packet.haslayer(IP):
                # print(packet[IP].src)
                # print(packet[IP].dst)
                dstofpacket = packet[IP].dst
                PcapsIps.objects.update_or_create(ip=dstofpacket, uuid=individualpcap)
    return render(request, 'about.html', {"list": list})
You can use the above answer (multiprocessing) and also improve Scapy's reading speed by using the PcapReader generator rather than rdpcap:
with PcapReader(filename) as fdesc:
    for pkt in fdesc:
        [actions on the pkt]
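For concreteness, here is a rough sketch of how the two suggestions might be combined (the helper names are illustrative, not from the question): the pcaps are parsed in worker processes with PcapReader, while the Django ORM writes stay in the parent process, since sharing database connections across forked workers is fragile.

from multiprocessing import Pool

from scapy.all import PcapReader, IP

from .models import Pcaps, PcapsIps  # models as in the question

def extract_dst_ips(filename):
    # Runs in a worker process: return only plain data, no ORM objects.
    dsts = []
    with PcapReader(filename) as fdesc:
        for pkt in fdesc:
            if pkt.haslayer(IP):
                dsts.append(pkt[IP].dst)
    return filename, dsts

def analyse_all_parallel():
    pcaps_by_name = {str(p.filename): p for p in Pcaps.objects.all()}
    with Pool() as pool:
        for filename, dsts in pool.imap_unordered(extract_dst_ips, pcaps_by_name):
            for dst in dsts:
                PcapsIps.objects.update_or_create(ip=dst, uuid=pcaps_by_name[filename])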
I consider mixing multiprocessing and Django tricky. I worked on such a solution once and in the end decided to use Celery and RabbitMQ.
With Celery you can easily define a task that processes a single pcap, and then start a few independent workers that process files in the background. Such a solution results in a slightly more complicated architecture (you need to provide a message queue, e.g. RabbitMQ, and the Celery workers), but you gain much simpler code.
http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html
In my case Celery saved a lot of time.
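As a rough illustration only (the task and model names are assumptions based on the question, and a standard Celery setup for the Django project as in the linked guide is presumed), such a task could look like this:

# tasks.py
from celery import shared_task
from scapy.all import PcapReader, IP

from .models import Pcaps, PcapsIps  # models as in the question

@shared_task
def process_pcap(pcap_pk):
    # Process one pcap in the background, writing results as it goes.
    pcap = Pcaps.objects.get(pk=pcap_pk)
    with PcapReader(str(pcap.filename)) as fdesc:
        for pkt in fdesc:
            if pkt.haslayer(IP):
                PcapsIps.objects.update_or_create(ip=pkt[IP].dst, uuid=pcap)

The view would then just enqueue one task per pcap, e.g. process_pcap.delay(individualpcap.pk) in a loop, instead of doing the parsing inline.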
You can also check this question and answers:
How to use python multiprocessing module in django view
What type of parallel Python approach would be suited to efficiently spreading the CPU-bound workload shown below? Is it feasible to parallelize this section? There does not seem to be much tight coupling between the loop iterations, i.e. portions of the loop could be handled in parallel as long as appropriate communication reconstructs the store variable at the end. I'm currently using Python 2.7, but if a strong case can be made that this problem is easily handled in a newer version, I will consider migrating the code base.
I have tried to capture the spirit of the computation with the example below. I believe that it has the same connectedness between the loops/variables as my actual code.
import numpy as np

nx = 20
ny = 30
myList1 = [0]*100
myList2 = [1]*25
value1 = np.zeros(nx)
value2 = np.zeros(ny)
store = np.zeros((nx, ny, len(myList1), len(myList2)))
for i in range(nx):
    for j in range(ny):
        f = calc(value1[i], value2[j])  # returns a list
        for k, data1 in enumerate(myList1):
            for p, data2 in enumerate(myList2):
                meanval = np.sum(f[:]/data1)*data2
                store[i, j, k, p] = meanval
Here are two approaches you can take. What's wise also depends on where the bottleneck is, which is something that can best be measured rather than guessed.
The ideal option would be to leave all low level optimization to Numpy. Right now you have a mix of native Python code and Numpy code. The latter doesn't play well with loops. They work, of course, but by having loops in Python, you force operations to take place sequentially in the order you specified. It's better to give Numpy operations that it can perform on as many elements at once as possible, i.e. matrix transformations. That benefits performance, not only because of automatic (partial) parallelization; even single threads will be able to get more out of the CPU. A highly recommended read to learn more about this is From Python to Numpy.
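As a sketch of that idea, using the example's variable names and assuming calc is as in the question: since data1 and data2 are scalars, np.sum(f[:]/data1)*data2 equals np.sum(f)/data1*data2, so the two inner Python loops can be replaced by a single broadcasted outer product.

import numpy as np

arr1 = np.asarray(myList1, dtype=float)   # shape (len(myList1),)
arr2 = np.asarray(myList2, dtype=float)   # shape (len(myList2),)
# factor[k, p] = data2[p] / data1[k]; the example's zero-valued myList1
# still divides by zero here, exactly as in the original loops
factor = np.outer(1.0 / arr1, arr2)

store = np.zeros((nx, ny, len(myList1), len(myList2)))
for i in range(nx):
    for j in range(ny):
        f = np.asarray(calc(value1[i], value2[j]))
        store[i, j] = np.sum(f) * factor  # fills the whole (k, p) slice at once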
If you do need to parallelize pure Python code, you have little choice but to go with multiple processes. For that, refer to the multiprocessing module. Rearrange the code into three steps:
Preparing the inputs for every job
Dividing those jobs between a pool of workers to be run in parallel (fork/map)
Collecting the results (join/reduce)
You need to strike a balance between enough processes to make parallelizing worthwhile, and not so many that they will be too short-lived. The cost of spinning up processes and communicating with them would then become significant by itself.
A simple solution would be to generate a list of (i,j) pairs, so that there will be nx*ny jobs. Then make a function that takes such a pair as input and returns a list of (i,j,k,p,meanval). Try to use only the inputs to the function and return a result. Keep everything local; no side effects, et cetera. Read-only access to globals such as myList1 is okay, but modification requires special measures as described in the documentation. Pass the function and the list of inputs to a worker pool. Once it has finished producing partial results, combine all of those into your store.
Here's an example:
from multiprocessing import Pool
import numpy as np

# Global variables are OK, as long as their contents are not modified, although
# these might just as well be moved into the worker function or an initializer
nx = 20
ny = 30
myList1 = [0]*100
myList2 = [1]*25
value1 = np.zeros(nx)
value2 = np.zeros(ny)

def calc_meanvals_for(pair):
    """Process a reasonably sized chunk of the problem"""
    i, j = pair
    f = calc(value1[i], value2[j])  # calc as defined in the question
    results = []
    for k, data1 in enumerate(myList1):
        for p, data2 in enumerate(myList2):
            meanval = np.sum(f[:]/data1)*data2
            results.append((i, j, k, p, meanval))
    return results

# This module will be imported by every worker - that's how they will be able
# to find the global variables and the calc function - so make sure to check
# if this is the main program, because without that, every worker would start
# more workers, each of which would start even more, and so on, in an endless loop
if __name__ == '__main__':
    # Create a pool of worker processes, each able to use a CPU core
    pool = Pool()
    # Prepare the arguments, one per function invocation (tuples to fake multiple)
    arg_pairs = [(i, j) for i in range(nx) for j in range(ny)]
    # Now comes the parallel step: given a function and a list of arguments,
    # have a worker invoke that function with one argument until all arguments
    # have been used, collecting the return values in a list
    return_values = pool.map(calc_meanvals_for, arg_pairs)
    # Since the function also returns a list, there's now a list of lists - consider
    # itertools.chain.from_iterable to flatten them - to be processed further
    store = np.zeros((nx, ny, len(myList1), len(myList2)))
    for results in return_values:
        for i, j, k, p, meanval in results:
            store[i, j, k, p] = meanval
Just trying to get my head around the programming model here. Scenario is I'm using Pub/Sub + Dataflow to instrument analytics for a web forum. I have a stream of data coming from Pub/Sub that looks like:
ID | TS | EventType
1 | 1 | Create
1 | 2 | Comment
2 | 2 | Create
1 | 4 | Comment
And I want to end up with a stream coming from Dataflow that looks like:
ID | TS | num_comments
1 | 1 | 0
1 | 2 | 1
2 | 2 | 0
1 | 4 | 2
I want the job that does this rollup to run as a stream process, with new counts being populated as new events come in. My question is, where is the idiomatic place for the job to store the state for the current topic id and comment counts? Assuming that topics can live for years. Current ideas are:
Write a 'current' entry for the topic id to Bigtable, and in a DoFn query the current comment count for each topic id as it comes in. Even as I write this, I'm not a fan.
Use side inputs somehow? It seems like maybe this is the answer, but if so, I don't totally understand how.
Set up a streaming job with a global window, with a trigger that goes off every time it gets a record, and rely on Dataflow to keep the entire pane history somewhere. (unbounded storage requirement?)
EDIT: Just to clarify, I wouldn't have any trouble implementing any of these three strategies, or a million other ways of doing it; I'm more interested in what is the best way of doing it with Dataflow. What will be most resilient to failure, to having to re-process history for a backfill, etc.?
EDIT 2: There is currently a bug in the Dataflow service where updates fail when inputs are added to a Flatten transformation, which means you'll need to discard and rebuild any state accrued in the job if a change to the job adds something to a Flatten operation.
You should be able to use triggers and a combine to accomplish this.
PCollection<ID> comments = /* IDs from the source */;
PCollection<KV<ID, Long>> commentCounts = comments
    // Produce speculative results by triggering as data comes in.
    // Note that this won't trigger after *every* element, but it will
    // trigger relatively quickly (as the system divides incoming data
    // into work units). You could also throttle this with something
    // like:
    //   AfterProcessingTime.pastFirstElementInPane()
    //       .plusDelayOf(Duration.standardMinutes(5))
    // which will produce output every 5 minutes
    .apply(Window.triggering(
            Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
        .accumulatingFiredPanes())
    // Count the occurrences of each ID
    .apply(Count.perElement());

// Produce an output String -- in your use case you'd want to produce
// a row and write it to the appropriate sink
commentCounts.apply(ParDo.of(new DoFn<KV<ID, Long>, String>() {
  @Override
  public void processElement(ProcessContext c) {
    KV<ID, Long> element = c.element();
    // c.pane() includes details about the pane of the window being
    // processed, including a strictly increasing index of the
    // number of panes that have been produced for the key.
    PaneInfo pane = c.pane();
    c.output(element.getKey() + " | " + pane.getIndex() + " | " + element.getValue());
  }
}));
Depending on your data, you could also read whole comments from the source, extract the ID, and then use Count.perKey() to get the counts for each ID. If you want a more complicated combination, you could look at defining a custom CombineFn and using Combine.perKey.
Since BigQuery does not support overwriting rows, one way to go about this is to write the events to BigQuery, and query the data using COUNT:
SELECT ID, COUNT(num_comments) from Table GROUP BY ID;
You can also do per-window aggregations of num_comments within Dataflow before writing the entries to BigQuery; the query above will continue to work.