Running a flow entirely through code with django-viewflow

I want to run my process entirely by code. I've managed to start a process, but I can't run the next part of it. I tried using flow.Function and calling my desired function, but it says
'Function {} should be called with task instance', 'execute'
and the documentation on this subject isn't very clear.
flows.py
@flow.flow_start_func
def create_flow(activation, campos_proceso, **kwargs):
    activation.process.asignador = campos_proceso['asignador']
    activation.process.ejecutor = campos_proceso['ejecutor']
    activation.process.tipo_de_flujo = campos_proceso['tipo_de_flujo']
    activation.process.estado_del_entregable = campos_proceso['estado_del_entregable']
    activation.process.save()
    activation.prepare()
    activation.done()
    return activation

@flow.flow_func
def exec_flow(activation, process_fields, **kwargs):
    activation.process.revisor = process_fields['revisor']
    activation.process.save()
    activation.prepare()
    activation.done()
    return activation

@frontend.register
class Delivery_flow(Flow):
    process_class = DeliveryProcess

    start = flow.StartFunction(create_flow).Next(this.execute)
    execute = flow.Function(exec_flow).Next(this.end)
    end = flow.End()
views.py
def Execute(request):  # campos_ejecucion, request):
    campos_ejecucion = {
        'ejecutor': request.user,
        'revisor': request.user,
        'observaciones_ejecutor': 'Este es un puente magico',
        'url_ejecucion': 'https://www.youtube.com/watch?v=G-yNGb0Q91Y',
    }
    campos_proceso = {
        'revisor': campos_ejecucion['revisor'],
    }
    flows.Delivery_flow.execute.run()
    Entregable.objects.crear_entregable()
    return render(request, "Flujo/landing.html")

Generally, running a flow "entirely by code" is an antipattern and should be avoided. A Flow class is a set of views bound to URLs, much like a class-based URL config, so you don't need an additional, separate view and URL entry.
For custom views, you can take a look at the cookbook sample - https://github.com/viewflow/cookbook/blob/master/custom_views/demo/bloodtest/views.py
As for the actual question, you are missing a task_loader. A Function node has to figure out which task is actually being executed. You can do that at the flow layer (with task_loader), or fetch the Task model instance directly and pass it as the function's first parameter - http://docs.viewflow.io/viewflow_flow_nodes.html#viewflow.flow.Function
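A minimal sketch of the second option, assuming the viewflow 1.x API (the process lookup, status value, and run() arguments here are illustrative, not taken from your code):
# Sketch: fetch the pending Task for this process and hand it to the
# Function node explicitly (lookup fields and status are assumptions).
from viewflow.models import Task

process = DeliveryProcess.objects.get(pk=process_pk)  # hypothetical lookup
task = Task.objects.get(
    process=process,
    flow_task=flows.Delivery_flow.execute,  # the Function node to run
    status='NEW',                           # still awaiting execution
)
flows.Delivery_flow.execute.run(task, campos_proceso)
With a task_loader on the Function node instead, the node resolves the task itself and you can call run() with your own arguments directly.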

Related

How to schedule a celery task without blocking Django

I have a Django service that registers a lot of clients and renders a payload containing a timer (let's say 800 s), after which the client should be suspended by the service (status changed from REGISTERED to SUSPENDED in MongoDB).
I'm running Celery with RabbitMQ as the broker, as follows:
celery/tasks.py
@app.task(bind=True, name='suspend_nf')
def suspend_nf(self, pk):
    collection.update_one({'instanceId': str(pk)},
                          {'$set': {'nfStatus': 'SUSPENDED'}})
and I call the task inside a Django view like this:
api/views.py
def put(self, request, pk):
    now = datetime.datetime.now(tz=pytz.timezone(TIME_ZONE))
    timer = now + datetime.timedelta(seconds=response_data["heartBeatTimer"])
    suspend_nf.apply_async(eta=timer)
    response = Response(data=response_data, status=status.HTTP_202_ACCEPTED)
    response['Location'] = str(request.build_absolute_uri())
What am I missing here?
Are you saying that your view blocks entirely, or that it waits for the ETA to pass before completing?
Did you receive any error?
Try using the countdown parameter instead of eta; in your case it's better because you don't need to manipulate dates:
suspend_nf.apply_async(countdown=response_data["heartBeatTimer"])
Let's see if your view behaves differently.
I have finally found a workaround. Since I'm working on a small project, I don't really need Celery + RabbitMQ; simple threading does the job.
The task looks like this:
import time

def suspend_nf(pk, timer):
    time.sleep(timer)
    collection.update_one({'instanceId': str(pk)},
                          {'$set': {'nfStatus': 'SUSPENDED'}})
and it is called inside the view like this:
import threading

timer = int(response_data["heartBeatTimer"])
thread = threading.Thread(target=suspend_nf, args=(pk, timer), daemon=True)
thread.start()

Missing matching query when writing custom views

Because of some form manipulations I had to write custom views, and I followed the example in the cookbook. When I write this in my view:
if request.POST:
    if includeHelper.check_valid():
        process = includeHelper.save()
        request.activation.process = process
        request.activation.done()
        return redirect(get_next_task_url(request, request.activation.process))
I get a "Matching query does not exist" error. I first thought my includeHelper, which is just a class managing formsets etc., returned a process that could not be saved due to some error in my code. However, when I skip the part that involves request.activation:
if request.POST:
    if includeHelper.check_valid():
        process = includeHelper.save()
        return HttpResponse("ok")
it works. Any ideas?
activation.process and activation.task are instantiated in the @flow_view and @flow_start_view decorators, so you can't just do request.activation.process = process to substitute the only process reference.
Instead, modify request.activation.process in place and call activation.done() at the end.
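A minimal sketch of that approach (the copied fields are hypothetical; adapt them to whatever includeHelper actually validates):
# Sketch: update the activation's own process instance instead of
# replacing the reference (field names are hypothetical).
if request.POST:
    if includeHelper.check_valid():
        cleaned = includeHelper.save()
        process = request.activation.process
        process.title = cleaned.title              # hypothetical field
        process.description = cleaned.description  # hypothetical field
        process.save()
        request.activation.done()
        return redirect(get_next_task_url(request, process))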

How to start a new request after the item_scraped scrapy signal is called?

I need to scrape the data for each item from a website using Scrapy (http://example.com/itemview). I have a list of itemIDs, and I need to pass each one to a form on example.com.
There is no URL change per item, so for each request in my spider the URL will always be the same, but the content will be different.
I don't want a for loop handling each request, so I followed the steps below:
started the spider with the above URL
added item_scraped and spider_closed signals
passed through several functions
passed the scraped data to the pipeline
triggered the item_scraped signal
After this, it automatically calls the spider_closed signal, but I want the above steps to continue until all the itemIDs are finished.
import scrapy
from scrapy import Request, signals
from scrapy.xlib.pydispatch import dispatcher
from selenium import webdriver

class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["example.com"]
    itemIDs = [11111, 22222, 33333]
    current_item_num = 0

    def __init__(self, itemids=None, *args, **kwargs):
        super(ExampleSpider, self).__init__(*args, **kwargs)
        dispatcher.connect(self.item_scraped, signals.item_scraped)
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def spider_closed(self, spider):
        self.driver.quit()

    def start_requests(self):
        request = self.make_requests_from_url('http://example.com/itemview')
        yield request

    def parse(self, response):
        self.driver = webdriver.PhantomJS()
        self.driver.get(response.url)
        first_data = self.driver.find_element_by_xpath('//div[@id="itemview"]').text.strip()
        yield Request(response.url, meta={'first_data': first_data},
                      callback=self.processDetails, dont_filter=True)

    def processDetails(self, response):
        itemID = self.itemIDs[self.current_item_num]
        # ...form submission with the current itemID goes here...
        # ...the content of the page is updated with the given itemID...
        yield Request(response.url, meta={'first_data': response.meta['first_data']},
                      callback=self.processData, dont_filter=True)

    def processData(self, response):
        # ...some more scraping goes here...
        item = ExamplecrawlerItem()
        item['first_data'] = response.meta['first_data']
        yield item

    def item_scraped(self, item, response, spider):
        self.current_item_num += 1
        # I need to call the processDetails function here for the next itemID,
        # and the process needs to continue until the itemIDs are finished.
        self.parse(response)
My pipeline:
class ExampleDBPipeline(object):
    def process_item(self, item, spider):
        MYCOLLECTION.insert(dict(item))
        return item
I wish I had an elegant solution to this; instead, here's a hackish way of calling the underlying classes:
self.crawler.engine.slot.scheduler.enqueue_request(scrapy.Request(url, self.yourCallBack))
However, you can yield a request after you yield the item, and have it call back to self.processDetails. Simply add this to your processData function:
yield item
self.counter += 1
yield scrapy.Request(response.url, callback=self.processDetails, dont_filter=True, meta={"your": "Dictionary"})
Also, PhantomJS can be nice and make your life easy, but it is slower than regular connections. If possible, find the request for the JSON data or whatever makes the page unparseable without JS. To do so, open Chrome, right-click, click Inspect, go to the Network tab, enter the ID into the form, then look at the XHR or JS tabs for a JSON response that has the data or the next URL you want. Most of the time there will be some URL made by adding the ID; if you can find it, you can just concatenate your URLs and call that directly, without paying the cost of JS rendering. Sometimes it is randomized, or not there, but I've had fair success with it. You can then also use that to yield many requests at the same time without worrying about PhantomJS trying to do two things at once, or having to initialize many instances of it. You could use tabs, but that is a pain.
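If you do find such an endpoint, here is a sketch of hitting it directly (the endpoint and form field name are guesses; inspect the actual request to fill them in):
# Sketch: post each itemID straight to the form endpoint, skipping
# PhantomJS entirely (URL and field name are hypothetical).
def start_requests(self):
    for item_id in self.itemIDs:
        yield scrapy.FormRequest(
            'http://example.com/itemview',
            formdata={'itemID': str(item_id)},  # hypothetical field name
            callback=self.processData,
            dont_filter=True,
        )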
Also, I would use a Queue of your IDs to ensure thread safety. Otherwise you could have processDetails called twice on the same ID. (In the logic of your program everything seems to go linearly anyway, which means you aren't using Scrapy's concurrency capabilities and your program will run more slowly.) To use a Queue, add:
import Queue

# inside the class definition, add
itemIDQueue = Queue.Queue()

# within __init__, add
for ID in self.itemIDs:
    self.itemIDQueue.put(ID)

# within processDetails, replace itemID = self.itemIDs[self.current_item_num] with
itemID = self.itemIDQueue.get()
Then there is no need to increment the counter, and your program is thread safe.
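One caveat worth sketching (Queue is the Python 2 module; on Python 3 it is queue): decide what happens once the queue runs dry, for example:
# Sketch: stop issuing follow-up requests once every ID is consumed.
try:
    itemID = self.itemIDQueue.get_nowait()
except Queue.Empty:
    return  # all itemIDs processed; let the spider close normally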

TDD Django tests seem to skip certain parts of the view code

I'm writing some tests for a site using Django TDD.
The problem is that when I manually go to the test server, fill in the form, and submit it, everything seems to work fine. But when I run the test with manage.py test wiki, parts of the code within the view seem to be skipped. The Page parts all work fine, but the PageModification parts, and even a write() I added just to see what was going on, seem to be ignored.
I have no idea what could be causing this and can't seem to find a solution. Any ideas?
This is the code:
test.py
#imports
class WikiSiteTest(LiveServerTestCase):
    ....
    def test_wiki_links(self):
        '''Go to the site, and check a few links'''
        # creating a few objects which will be used later
        .....
        # some code to get to where I want:
        .....
        # testing the link to see if the tester can add pages
        link = self.browser.find_element_by_link_text('Add page (for testing only. delete this later)')
        link.click()
        # filling in the form
        template_field = self.browser.find_element_by_name('template')
        template_field.send_keys('homepage')
        slug_field = self.browser.find_element_by_name('slug')
        slug_field.send_keys('this-is-a-slug')
        title_field = self.browser.find_element_by_name('title')
        title_field.send_keys('this is a title')
        meta_field = self.browser.find_element_by_name('meta_description')
        meta_field.send_keys('this is a meta')
        content_field = self.browser.find_element_by_name('content')
        content_field.send_keys('this is content')
        # submitting the filled form so that it can be processed
        s_button = self.browser.find_element_by_css_selector("input[value='Submit']")
        s_button.click()
        # now the view is called
and a view:
views.py
def page_add(request):
    '''This function does one of these 3 things:
    - Prepares an empty form
    - Checks the form data it got. If it's OK, it saves it and creates and
      saves a copy in the form of a PageModification.
    - Checks the form data it got. If it's not OK, it redirects the user back'''
    .....
    if request.method == 'POST':
        form = PageForm(request.POST)
        if form.is_valid():
            user = request.user.get_profile()
            page = form.save(commit=False)
            page.partner = user.partner
            page.save()  # works

            # gets ignored
            pagemod = PageModification()
            pagemod.template = page.template
            pagemod.parent = page.parent
            pagemod.page = Page.objects.get(slug=page.slug)
            pagemod.title = page.title
            pagemod.meta_description = page.meta_description
            pagemod.content = page.content
            pagemod.author = request.user.get_profile()
            pagemod.save()

            f = open("/location/log.txt", "w", True)
            f.write('are you reaching this line?')
            f.close()
            # /gets ignored

    # a render to response
Then later I do:
test.py
print '###############Data check##################'
print Page.objects.all()
print PageModification.objects.all()
print '###############End data check##############'
And get:
terminal:
###############Data check##################
[<Page: this is a title 2012-10-01 14:39:21.739966+00:00>]
[]
###############End data check##############
All the imports are fine. Putting the page.save() after the ignored code makes no difference.
This only happens when running it through the TDD test.
Thanks in advance.
How very strange. Could it be that the view is somehow erroring at the PageModification stage? Do you have any checks later in your test that assert the response from the view is coming through correctly, i.e. that a 500 error is not being returned instead?
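For example, a quick sanity check you could drop in right after the form submission (a sketch using the same Selenium API as the test above; with LiveServerTestCase you can only inspect the rendered page, so checking the body text stands in for a status-code assertion):
# Sketch: make sure the POST didn't blow up with a 500 before
# checking the database contents.
body = self.browser.find_element_by_tag_name('body')
self.assertNotIn('Server Error', body.text)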
Now, this was a long time ago. It was solved, but the solution was a little embarrassing: basically, it was me being stupid. I can't remember the exact details, but I believe a different view was being called instead of the one I showed here. That view had the same code except for the "skipped" part.
My apologies to anyone who took the time to look into this.

Django: Passing a request directly (inline) to a second view

I'm trying to call a view directly from another (if that is at all possible). I have this view:
def product_add(request, order_id=None):
    # Works. Handles a normal POST check and form submission and redirects
    # to another page if the form is properly validated.
Then I have a second view that queries the DB for the product data and should call the first one:
def product_copy_from_history(request, order_id=None, product_id=None):
    product = Product.objects.get(owner=request.user, pk=product_id)
    # I need to somehow set up a form with the product data so that the first
    # view thinks it gets a POST request.
    second_response = product_add(request, order_id)
    return second_response
Since the second view needs to add the product the same way the first one does, I was wondering if I could just call the first view from the second.
What I'm aiming for is passing the request object through to the first view and returning the response object it produces back to the client.
Any help greatly appreciated, criticism as well if this is a bad way to do it, but then please give some pointers on how to avoid repeating myself (DRY).
Thanks!
Gerard.
My god, what was I thinking. This would of course be the cleanest solution:
def product_add_from_history(request, order_id=None, product_id=None):
    """Add an existing product to the current order."""
    order = get_object_or_404(Order, pk=order_id, owner=request.user)
    product = Product.objects.get(owner=request.user, pk=product_id)
    newproduct = Product(
        owner=request.user,
        order=order,
        name=product.name,
        amount=product.amount,
        unit_price=product.unit_price,
    )
    newproduct.save()
    return HttpResponseRedirect(reverse('order-detail', args=[order_id]))
A view is a regular Python function, so you can of course call one from another, provided you pass the proper arguments and handle the result correctly (404s and so on). Whether it is good practice I don't know; I would extract a utility function myself and call it from both views.
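For instance, a minimal sketch of that extraction (the helper name and signature are illustrative):
# Sketch: shared helper that both product_add and
# product_copy_from_history can call (signature is hypothetical).
def create_product(user, order, name, amount, unit_price):
    product = Product(owner=user, order=order, name=name,
                      amount=amount, unit_price=unit_price)
    product.save()
    return product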
If you are fine with the overhead of calling your API through HTTP, you can use urllib to post a request to your product_add handler.
As far as I know, this can cause trouble if you develop with the dev server that comes with Django, as it only handles one request at a time and will block indefinitely (see trac, google groups).
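A sketch of that HTTP round-trip in the Python 2 idiom of the era (the endpoint path and field names are guesses, not your actual URL config):
# Sketch: POST the product fields to the product_add URL over HTTP
# (URL pattern and form fields are hypothetical).
import urllib
import urllib2

data = urllib.urlencode({
    'name': product.name,
    'amount': product.amount,
    'unit_price': product.unit_price,
})
response = urllib2.urlopen(
    'http://localhost:8000/orders/%s/products/add/' % order_id, data)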