Pytest on a Flask-based API - test by calling the remote API

New to using Pytest on APIs. From my understanding, testing creates another instance of Flask. Additionally, the tutorials I have seen also suggest creating a separate DB table instance to add, fetch and remove data for test purposes. However, I simply plan to use the remote API URL as the host and make the calls against it.
Now, I set up my conftest like this, where the flag --testenv indicates which of the hosts listed below the GET/POST calls should go to:
import pytest
import subprocess


def pytest_addoption(parser):
    """Add option to pass --testenv=api_server to the pytest CLI command"""
    parser.addoption(
        "--testenv", action="store", default="exodemo", help="my option: type1 or type2"
    )


@pytest.fixture(scope="module")
def testenv(request):
    return request.config.getoption("--testenv")


@pytest.fixture(scope="module")
def testurl(testenv):
    if testenv == 'api_server':
        return 'http://api_url:5000/'
    else:
        return 'http://localhost:5000'
And my test file is written like this:
import json

import pytest
from app import app
from flask import request


def test_nodes(app):
    t_client = app.test_client()
    truth = [
        {
            *body*
        }
    ]
    res = t_client.get('/topology/nodes')
    print(res)
    assert res.status_code == 200
    assert truth == json.loads(res.get_data())
I run the code using this:
python3 -m pytest --testenv api_server
What I expect is that the test would simply make a call to the remote API with the creds, fetch the data regardless of how it gets pulled on the remote side, and bring it back here for assertion. However, I am getting a 400 BAD REQUEST error, which looks like this:
assert 400 == 200
E + where 400 = <WrapperTestResponse streamed [400 BAD REQUEST]>.status_code
single_test.py:97: AssertionError
--------------------- Captured stdout call ----------------------
{"timestamp": "2022-07-28 22:11:14,032", "level": "ERROR", "func": "connect_to_mysql_db", "line": 23, "message": "Error connecting to the mysql database (2003, \"Can't connect to MySQL server on 'mysql' ([Errno -3] Temporary failure in name resolution)\")"}
<WrapperTestResponse streamed [400 BAD REQUEST]>
Does this mean that the test is still trying to look up the database locally when fetching? I am also unable to figure out which host the test request is actually being sent to, so I am kind of stuck here. Looking to get some help here.
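For context, this is roughly what I had in mind: the test takes the testurl fixture from conftest.py and hits the remote host directly, e.g. with the requests library (just a sketch of my intent, not code I have actually run):
import requests


def test_nodes_remote(testurl):
    # hypothetical: call the remote API instead of going through app.test_client()
    res = requests.get(testurl.rstrip('/') + '/topology/nodes')
    assert res.status_code == 200
    data = res.json()
    assert data  # here I would compare data against the expected truth list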
Thanks.

Related

Dialogflow: Agent metadata not found for agentId

I'm trying to use Dialogflow's detect_intent in Python and I keep getting:
404 com.google.apps.framework.request.NotFoundException: Agent metadata not found for agentId: ####-####-####-####-####
Here's a snippet of my code:
import os

import google.cloud.dialogflow as dialogflow
from CONFIG import DIALOGFLOW_PROJECT_ID

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = 'credentials/dialogflow.json'


def predict_intent(text, language):
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(DIALOGFLOW_PROJECT_ID, SESSION_ID)
    text_input = dialogflow.TextInput(text=text, language_code=language)
    query_input = dialogflow.QueryInput(text=text_input)
    response = session_client.detect_intent(session=session, query_input=query_input)  # ERROR
    return response.query_result.intent.display_name
I tried running the function multiple times; some calls succeed, but most fail with the exception above.
I can train the bot using the same interface and it works fine.
I'm using Python 3.7 and the following Google Cloud modules: google-api-core==2.0.1, google-auth==2.0.2, google-cloud-dialogflow==2.7.1, googleapis-common-protos==1.53.0.

How can I mock ECS with moto?

I want to create a mock ECS cluster, but it seems not to work properly. Although something is mocked (I don't get a credentials error), it seems not to "save" the cluster.
How can I create a mock cluster with moto?
MVCE
foo.py
import boto3


def print_clusters():
    client = boto3.client("ecs")
    print(client.list_clusters())
    return client.list_clusters()["clusterArns"]
test_foo.py
import boto3
import pytest
from moto import mock_ecs

import foo


@pytest.fixture
def ecs_cluster():
    with mock_ecs():
        client = boto3.client("ecs", region_name="us-east-1")
        response = client.create_cluster(clusterName="test_ecs_cluster")
        yield client


def test_foo(ecs_cluster):
    assert foo.print_clusters() == ["test_ecs_cluster"]
What happens
$ pytest test_foo.py
Test session starts (platform: linux, Python 3.8.1, pytest 5.3.5, pytest-sugar 0.9.2)
rootdir: /home/math/GitHub
plugins: black-0.3.8, mock-2.0.0, cov-2.8.1, mccabe-1.0, flake8-1.0.4, env-0.6.2, sugar-0.9.2, mypy-0.5.0
collecting ...
―――――――――――――――――――――――――――――― test_foo ――――――――――――――――――――――――――――――
ecs_cluster = <botocore.client.ECS object at 0x7fe9b0c73580>
def test_foo(ecs_cluster):
> assert foo.print_clusters() == ["test_ecs_cluster"]
E AssertionError: assert [] == ['test_ecs_cluster']
E Right contains one more item: 'test_ecs_cluster'
E Use -v to get the full diff
test_foo.py:19: AssertionError
---------------------------- Captured stdout call ----------------------------
{'clusterArns': [], 'ResponseMetadata': {'HTTPStatusCode': 200, 'HTTPHeaders': {'server': 'amazon.com'}, 'RetryAttempts': 0}}
test_foo.py ⨯
What I expected
I expected the list of cluster ARNs to have one element (not the one in the assert statement, but an ARN). But the list is empty.
When creating a cluster, you're using a mocked ECS client.
When listing the clusters, you're creating a new ECS client outside the scope of moto.
In other words, you're creating a cluster in memory - but then asking AWS itself for a list of clusters.
You could rewrite the foo-method to use the mocked ECS client:
def print_clusters(client):
    print(client.list_clusters())
    return client.list_clusters()["clusterArns"]
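The function then works with whatever client it is given, so the test simply passes in the moto-backed client yielded by the ecs_cluster fixture: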
def test_foo(ecs_cluster):
    assert foo.print_clusters(ecs_cluster) == ["test_ecs_cluster"]
def test_foo(ecs_cluster):
    assert foo.print_clusters(ecs_cluster) == ["test_ecs_cluster"]
The assertion above will still fail, though; the fixed version looks like this:
def test_foo(ecs_cluster):
    assert foo.print_clusters(ecs_cluster) == ['arn:aws:ecs:us-east-1:123456789012:cluster/test_ecs_cluster']
Explanation
The original assert compares against the wrong value: foo.print_clusters(ecs_cluster) returns the list of cluster ARNs, i.e. ['arn:aws:ecs:us-east-1:123456789012:cluster/test_ecs_cluster'], while the test checks for the bare cluster name "test_ecs_cluster". Compare against the full ARN instead to make the test pass.
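Putting the two fixes together, a minimal self-contained sketch (my own illustration; it assumes moto's default account id 123456789012, which is where the ARN above comes from):
import boto3
from moto import mock_ecs


def print_clusters(client):
    # the rewritten foo.py helper: it uses whatever ECS client it is handed
    return client.list_clusters()["clusterArns"]


@mock_ecs
def test_print_clusters():
    client = boto3.client("ecs", region_name="us-east-1")
    client.create_cluster(clusterName="test_ecs_cluster")
    # list_clusters returns full ARNs, not bare cluster names
    assert print_clusters(client) == [
        "arn:aws:ecs:us-east-1:123456789012:cluster/test_ecs_cluster"
    ]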

"BrokenPipeError" with Flask and requests-toolbelt

I am trying to understand more precisely how HTTP connections work with Flask, so I wrote a very simple app and a simple client using requests and requests-toolbelt:
from flask import Flask, Response

app = Flask('file-streamer')


@app.route("/uploadDumb", methods=["POST"])
def upload_dumb():
    print("Hello")
    return Response(status=200)
So basically this server should just receive a request and return a response.
Then I implemented a simple piece of code that sends the request with requests-toolbelt:
import requests
from requests_toolbelt.multipart import encoder
values = {"file": ("test.zip", open("test.zip", "rb"), "application/zip"), "test": "hello"}
m = encoder.MultipartEncoder(fields=values)
r = requests.post(url="http://localhost:5000/uploadDumb", data=m, headers={"Content-Type": m.content_type})
The file I'm sending is a pretty large file that I want to upload with streaming.
The thing is, I expected the Flask server to wait for the whole file to be sent (even if the file itself is useless), and then return a response, but that's not what happens.
Actually, Flask responds at the very beginning of the upload with a 200 response, which causes the requests side to fail with a BrokenPipeError.
Could someone explain to me what is happening there?
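For reference, what I naively expected is something closer to this variant of the handler (my own untested sketch), where the request body is explicitly consumed before the response is returned:
from flask import Flask, Response, request

app = Flask('file-streamer')


@app.route("/uploadDumb", methods=["POST"])
def upload_dumb():
    # drain the incoming body so the client can finish streaming before we answer
    _ = request.get_data()
    return Response(status=200)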

aws boto3 client Stubber help stubbing unit tests

I'm trying to write some unit tests for AWS RDS. Currently, the start/stop RDS API calls have not yet been implemented in moto. I tried just mocking out boto3 but ran into all sorts of weird issues. I did some googling and found http://botocore.readthedocs.io/en/latest/reference/stubber.html
So I have tried to implement the example for RDS, but the code behaves like a normal client, even though I have stubbed it. I'm not sure what's going on or whether I am stubbing correctly.
import boto3
from botocore.stub import Stubber

from LambdaRdsStartStop.lambda_function import lambda_handler
from LambdaRdsStartStop.lambda_function import AWS_REGION


def tests_turn_db_on_when_cw_event_matches_tag_value(self, mock_boto):
    client = boto3.client('rds', AWS_REGION)
    stubber = Stubber(client)
    response = {u'DBInstances': [some copy pasted real data here], extra_info_about_call: extra_info}
    stubber.add_response('describe_db_instances', response, {})
    with stubber:
        r = client.describe_db_instances()
        lambda_handler({u'AutoStart': u'10:00:00+10:00/mon'}, 'context')
So the mocking WORKS for the first line inside the stubber, and the value of r is returned as my stubbed data. But when I go into my lambda_handler method inside my lambda_function.py, expecting it to still use the stubbed client, it behaves like a normal, unstubbed client:
lambda_function.py
def lambda_handler(event, context):
    rds_client = boto3.client('rds', region_name=AWS_REGION)
    rds_instances = rds_client.describe_db_instances()
error output:
File "D:\dev\projects\virtual_envs\rds_sloth\lib\site-packages\botocore\auth.py", line 340, in add_auth
raise NoCredentialsError
NoCredentialsError: Unable to locate credentials
You will need to patch boto3 in the module where it is called by the routine you are testing. Also, Stubber responses are consumed on each call, so you need another add_response for each stubbed call, as below:
def tests_turn_db_on_when_cw_event_matches_tag_value(self, mock_boto):
    client = boto3.client('rds', AWS_REGION)
    stubber = Stubber(client)
    # response data below should match the AWS documentation, otherwise botocore's error handling raises more errors
    response = {u'DBInstances': [{'DBInstanceIdentifier': 'rds_response1'}, {'DBInstanceIdentifier': 'rds_response2'}]}
    stubber.add_response('describe_db_instances', response, {})
    stubber.add_response('describe_db_instances', response, {})
    with mock.patch('lambda_handler.boto3') as mock_boto3:
        with stubber:
            r = client.describe_db_instances()  # first add_response consumed here
            mock_boto3.client.return_value = client
            response = lambda_handler({u'AutoStart': u'10:00:00+10:00/mon'}, 'context')  # second add_response consumed here
            # assert r == response

Django HTTP Response object for GAE Cron

I am doing this in a Django / GAE / Python environment:
cron:
#run events every 12 hours
and
def events(request):
    # read all records
    # do some processing on a few records
    return http.HttpResponseGone('Some Records are modified')
Result in production:
The job runs on time with a 'failed' message
However, it has done the job on the datastore exactly as required
No error log entry is seen
Dev: no errors; returns the message 'Some Records are modified'
Is it possible to avoid returning an HTTP response at all? I have no need for the HttpResponse myself; I have only kept it because dev server testing fails in its absence. Can someone help me make the code clean?
Gone is error 410. You should return 200 OK if the operation succeeds. When you return a plain HttpResponse, the default status is 200.
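For instance, a minimal variant under the same setup (a sketch, assuming the same http import used in the question):
def events(request):
    # read all records
    # do some processing on a few records
    return http.HttpResponse('Some Records are modified')  # status defaults to 200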