Let me use the following to explain my question:
In my Flask app, the URL '/long_time_task' produces a result based on the output of foo(), which takes a long time to run.
I'd like to know:
while foo() is running (i.e. the URL was clicked) but has not yet finished:
1. If the user refreshes the page, another foo() will start. Will both results eventually be saved in the DB and be visible at the URL /show_db_result (assuming no errors)?
2. If the user goes to other links, or logs out, will the result eventually be saved in the DB and be visible at /show_db_result (assuming no errors)?
I've tested the application myself, and everything seems to work fine (i.e. yes to both questions) without Celery or any other message-queue support.
@app.route('/long_time_task')
def long_time_task():
    if session.get('logged_in'):
        foo()
        return 'some results'
    return 'please log in'

def foo():
    # also requires log in
    # time-consuming task, e.g. print('calculating')
    # when finished, save the result to a database
    return 'msg'

@app.route('/page1')
def page1():
    return 'page1'

@app.route('/page2')
def page2():
    return 'page2'

@app.route('/show_db_result')
def show_db_result():
    # requires log in to see the user's results
    return "all foo()'s results, row by row"
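To make it concrete, here is roughly what I mean by foo() saving its result, with sqlite standing in purely for illustration (the exact storage isn't the point of my question):

import sqlite3
import time

def foo():
    # stand-in for the time-consuming work
    time.sleep(30)
    result = 'some expensive result'
    # when finished, save the result to a database
    conn = sqlite3.connect('results.db')
    conn.execute('CREATE TABLE IF NOT EXISTS results (msg TEXT)')
    conn.execute('INSERT INTO results VALUES (?)', (result,))
    conn.commit()
    conn.close()
    return 'msg'

@app.route('/show_db_result')
def show_db_result():
    conn = sqlite3.connect('results.db')
    rows = conn.execute('SELECT msg FROM results').fetchall()
    conn.close()
    return '<br>'.join(row[0] for row in rows)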
I understand that there are many articles about Flask long-running tasks and Celery. I'd like to know more about the basic mechanisms.
I am learning Flask and have run into an Internal Server Error. The problem I'm stuck on is why I even need to pass an argument to the user() method to avoid the error.
Check out the code below:
from flask import Flask

app = Flask(__name__)

@app.route("/<name>")
def user():
    return "hallo moto"

app.run()
I just want that if the user types anything after "/" in the URL, the user() method runs and "hallo moto" is returned, but instead it shows an Internal Server Error.
When using variable rules, the decorated function must accept the corresponding keyword argument:

@app.route("/<name>")
def user(name):
    return "hallo moto"
I'm trying to write a test suite for verifying the state of some servers using testinfra.
It's my first time working with python/testinfra/pytest.
As a brief, pseudocode-style example:
test_files.py
testinfra_hosts = ["server1", "server2", "server3"]

with open("tests/microservices_default_params.yml", "r") as f:
    try:
        default_params = yaml.safe_load(f)
    except yaml.YAMLError as exc:
        print(exc)

with open("tests/" + server + "/params/" + server + "_params.yml", "r") as f:
    try:
        instance_params = yaml.safe_load(f)
    except yaml.YAMLError as exc:
        print(exc)

@pytest.mark.parametrize(
    "name", [] + default_params["files"] + instance_params["files"]
)
def test_files(host, name):
    file = host.file(name)
    assert file.exists
Each server has its own unique params YAML file.
I want every server to go through the same test, but I need each server to run the test with its own parametrized values from its respective .yml file.
The problem with the code above is that it will try to execute all of server1's unique params against servers 2 and 3 as well, then start again with server2 being run against servers 1-3's unique params.
I can't find a clean way to essentially have the test run once with server1 as the host and server1's params, then do the same again with server2 and server2's params, and so on.
I've tried using for loops within the test file itself, reading each instance_params.yml into a dictionary with the key being the server name and the value containing all of that server's params. But that doesn't smell very good, and because the assert is inside the loop, if one of the params for that server fails, the loop exits and doesn't attempt any further params for that server.
I've looked into pytest_collection_modifyitems but I can't quite get my head around how to make it do what I want. I feel like there may be an easy solution to this that I'm missing.
My last resort would be to separate out the tests and the parametrized params individually, as:
@pytest.mark.parametrize(
    "server1_params", instance_params['server1']['files']
)
def test_files_server1(host, server1_params):
    ...

@pytest.mark.parametrize(
    "server2_params", instance_params['server2']['files']
)
def test_files_server2(host, server2_params):
    ...
That approach doesn't sound right to me though.
Any help for a fresh junior would be appreciated; I've never asked anything here before.
Hope it makes sense :)
Update:
@ajk Found the solution!
The pytest_generate_tests function was exactly what I was needing - and I've furthered my understanding of pytest along the way.
Thanks ajk! I owe you one :D
That's a fine question, and it looks like you've come at it from a couple of different angles already. I'm no expert myself, but there are a few different ways I can think of to do something like this in pytest. They all involve handing the heavy lifting over to fixtures. So here's a quick breakdown of one way to adapt the code you've already shared to use some fixtures:
Default Parameters
It looks like you've got some parameters that are not host-specific. It likely makes sense to pull those into a session-scoped fixture so you can reuse the same values across many hosts:
#pytest.fixture(scope="session")
def default_params():
with open("tests/microservices_default_params.yml", "r") as f:
try:
return yaml.safe_load(f)
except yaml.YAMLError as exc:
print(exc)
That way, you can add default_params as an argument to any test function that needs those values.
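For example (just an illustrative check, not something from your suite), a test that only needs the shared values could look like:

def test_default_params_loaded(default_params):
    # the session-scoped fixture is injected like any other fixture
    assert "files" in default_params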
Host-specific Parameters
You certainly could load these parameters as you were doing before, or put some lookup logic in the test itself. That may be the best and clearest approach! Another option is to have a parametrized fixture whose value varies by instance:
#pytest.fixture(scope="function", params=testinfra_hosts)
def instance_params(request):
with open(f"tests/{request.param}/params/{request.param}_params.yml", "r") as f:
try:
return request.param, yaml.safe_load(f)
except yaml.YAMLError as exc:
print(exc)
Now, if we add instance_params as an argument to a test function it will run the test once for each entry in testinfra_hosts. The fixture will return a new value each time, based on the active host.
Writing the Test
If we farm the heavy lifting out to fixtures, the actual test can become simpler:
def test_files(default_params, instance_params):
    hostname, params = instance_params
    merged_files = default_params["files"] + params["files"]
    print(f"""
    Host: {hostname}
    Files: {merged_files}
    """)
(I'm not even asserting anything here, just playing with the fixtures to make sure they're doing what I think they're doing)
Taking it for a spin
I tried this out locally with the following sample yaml files:
tests/microservices_default_params.yml
files:
- common_file1.txt
- common_file2.txt
tests/server1/params/server1_params.yml
files:
- server1_file.txt
tests/server2/params/server2_params.yml
files:
- server2_file.txt
tests/server3/params/server3_params.yml
files:
- server3_file.txt
- server3_file2.txt
Running the test file produces:
test_infra.py::test_files[server1]
Host: server1
Files: ['common_file1.txt', 'common_file2.txt', 'server1_file.txt']
PASSED
test_infra.py::test_files[server2]
Host: server2
Files: ['common_file1.txt', 'common_file2.txt', 'server2_file.txt']
PASSED
test_infra.py::test_files[server3]
Host: server3
Files: ['common_file1.txt', 'common_file2.txt', 'server3_file.txt', 'server3_file2.txt']
PASSED
Which seems to be at least the general direction you were shooting for. I hope some of that is useful - good luck and happy testing!
Update
The comment below asks about breaking this out so each file from the list generates its own test. I'm not sure of the best way to do that, but a couple of options might be:
Building a flat list of server/filename pairs and feeding that into @pytest.mark.parametrize
Using pytest_generate_tests to set up dynamic parametrization, similar to what's described here.
In either case, you could start with something like this:
import pytest
import yaml

testinfra_hosts = ["server1", "server2", "server3"]

def get_default_params():
    with open("tests/microservices_default_params.yml", "r") as f:
        try:
            return yaml.safe_load(f)
        except yaml.YAMLError as exc:
            print(exc)

def get_instance_params(instance):
    with open(f"tests/{instance}/params/{instance}_params.yml", "r") as f:
        try:
            return yaml.safe_load(f)
        except yaml.YAMLError as exc:
            print(exc)
To use @pytest.mark.parametrize, you could follow that up with:
def get_instance_files(instances):
    default_files = get_default_params()["files"]
    for instance in instances:
        instance_files = default_files + get_instance_params(instance)["files"]
        for filename in instance_files:
            yield (instance, filename)

@pytest.mark.parametrize("instance_file", get_instance_files(testinfra_hosts))
def test_files(instance_file):
    hostname, filename = instance_file
    print(
        f"""
        Host: {hostname}
        Files: {filename}
        """
    )
Or to take the pytest_generate_tests approach, you could do this instead:
def pytest_generate_tests(metafunc):
    if "instance_file" in metafunc.fixturenames:
        default_files = get_default_params()["files"]
        params = [
            (host, filename)
            for host in testinfra_hosts
            for filename in (
                default_files + get_instance_params(host)["files"]
            )
        ]
        metafunc.parametrize(
            "instance_file", params, ids=["_".join(param) for param in params]
        )

def test_files(instance_file):
    hostname, filename = instance_file
    print(
        f"""
        Host: {hostname}
        Files: {filename}
        """
    )
Either way could work, and I suspect more experienced folks might package the pytest_generate_tests version up into a class and clean up the logic a bit. We have to start somewhere though, eh?
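For what it's worth, here's a rough, untested sketch of what that packaging might look like, loosely following the "test scenarios" pattern from the pytest docs (the scenarios() helper and the class name are made up for illustration):

def pytest_generate_tests(metafunc):
    # parametrize any test whose class defines a scenarios() helper
    if metafunc.cls is not None and hasattr(metafunc.cls, "scenarios"):
        params = metafunc.cls.scenarios()
        metafunc.parametrize(
            "instance_file", params, ids=["_".join(param) for param in params]
        )

class TestFiles:
    @staticmethod
    def scenarios():
        default_files = get_default_params()["files"]
        return [
            (host, filename)
            for host in testinfra_hosts
            for filename in default_files + get_instance_params(host)["files"]
        ]

    def test_files(self, instance_file):
        hostname, filename = instance_file
        print(f"Host: {hostname} File: {filename}")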
I am trying to write a unit test case for an external-facing API of my Django application. I have a model called Dummy with two fields, temp and content. The following function is called by a third party to fetch the content field. temp is an indexed unique key.
@csrf_exempt
def fetch_dummy_content(request):
    try:
        temp = request.GET.get("temp")
        dummy_obj = Dummy.objects.get(temp=temp)
    except Dummy.DoesNotExist:
        content = 'Object not found.'
    else:
        content = dummy_obj.content
    return HttpResponse(content, content_type='text/plain')
I have the following unit test case.
def test_dummy_content(self):
    params = {
        'temp': 'abc'
    }
    dummy_obj = mommy.make(
        'Dummy',
        temp='abc',
        content='Hello World'
    )
    response = self.client.get(
        '/fetch_dummy_content/',
        params=params
    )
    self.assertEqual(response.status_code, 200)
    self.assertEqual(response.content, 'Hello World')
Every time I run the test case, it goes into the exception branch and returns 'Object not found.' instead of 'Hello World'. Upon further debugging, I found that temp from the request object inside the view function is always None, even though I am passing it in params.
I might be missing something that I'm not able to figure out. What's the proper way to test these kinds of functions?
There's no params parameter for get() or any of the other methods on the test client; you're probably thinking of data.
response = self.client.get(
    '/fetch_dummy_content/',
    data=params
)
It's the second argument anyway, so you can just do self.client.get('/fetch_dummy_content/', params) too.
Any unknown keyword arguments get included in the WSGI environment, which explains why you were not getting an error for using the wrong name.
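To illustrate (hypothetical view-side values, not output from your code), with the original call the stray keyword argument just ends up in the request's environment rather than the query string:

response = self.client.get('/fetch_dummy_content/', params={'temp': 'abc'})
# inside the view:
#   request.GET.get('temp')    -> None, because no query string was built
#   request.META.get('params') -> {'temp': 'abc'}, the unrecognised kwarg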
I'm writing some tests for a site using Django TDD.
The problem is that when I manually go to the test server, fill in the form, and submit it, everything seems to work fine. But when I run the test using manage.py test wiki, it seems to skip parts of the code within the view. The Page parts all seem to work fine, but the PageModification parts, and even a write() I added just to see what was going on, seem to be ignored.
I have no idea what could be causing this and can't seem to find a solution. Any ideas?
This is the code:
test.py
#imports

class WikiSiteTest(LiveServerTestCase):
    ....
    def test_wiki_links(self):
        '''Go to the site, and check a few links'''
        #creating a few objects which will be used later
        .....
        #some code to get to where I want:
        .....
        #testing the link to see if the tester can add pages
        link = self.browser.find_element_by_link_text('Add page (for testing only. delete this later)')
        link.click()
        #filling in the form
        template_field = self.browser.find_element_by_name('template')
        template_field.send_keys('homepage')
        slug_field = self.browser.find_element_by_name('slug')
        slug_field.send_keys('this-is-a-slug')
        title_field = self.browser.find_element_by_name('title')
        title_field.send_keys('this is a title')
        meta_field = self.browser.find_element_by_name('meta_description')
        meta_field.send_keys('this is a meta')
        content_field = self.browser.find_element_by_name('content')
        content_field.send_keys('this is content')
        #submitting the filled form so that it can be processed
        s_button = self.browser.find_element_by_css_selector("input[value='Submit']")
        s_button.click()
        # now the view is called
and a view:
views.py
def page_add(request):
    '''This function does one of these 3 things:
    - Prepares an empty form
    - Checks the form data it got. If it's ok, it will save it and create and save
      a copy in the form of a PageModification.
    - Checks the form data it got. If it's not ok, it will redirect the user back'''
    .....
    if request.method == 'POST':
        form = PageForm(request.POST)
        if form.is_valid():
            user = request.user.get_profile()
            page = form.save(commit=False)
            page.partner = user.partner
            page.save()  # works

            # Gets ignored
            pagemod = PageModification()
            pagemod.template = page.template
            pagemod.parent = page.parent
            pagemod.page = Page.objects.get(slug=page.slug)
            pagemod.title = page.title
            pagemod.meta_description = page.meta_description
            pagemod.content = page.content
            pagemod.author = request.user.get_profile()
            pagemod.save()

            f = open("/location/log.txt", "w", True)
            f.write('are you reaching this line?')
            f.close()
            # /gets ignored

            # a render to response
Then later I do:
test.py
print '###############Data check##################'
print Page.objects.all()
print PageModification.objects.all()
print '###############End data check##############'
And get:
terminal:
###############Data check##################
[<Page: this is a title 2012-10-01 14:39:21.739966+00:00>]
[]
###############End data check##############
All the imports are fine. Putting the page.save() after the ignored code makes no difference.
This only happens when running it through the TDD test.
Thanks in advance.
How very strange. Could it be that the view is somehow erroring at the PageModification stage? Have you got any checks later on in your test that assert that the response from the view is coming through correctly, i.e. that a 500 error is not being returned instead?
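Something along these lines right after the click would surface that (hypothetical additions, not code from the question):

s_button.click()

# Selenium doesn't expose the HTTP status code, but a Django error page
# is easy to spot in the rendered source
self.assertNotIn('Server Error', self.browser.page_source)

# and the database state can be checked directly
self.assertEqual(PageModification.objects.count(), 1)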
Now this was a long time ago.
It was solved but the solution was a little embarrassing. Basically, it was me being stupid. I can't remember the exact details but I believe a different view was called instead of the one that I showed here. That view had the same code except the "skipped" part.
My apologies to anyone who took their time looking into this.