I am in the process of taking over a legacy system that contains a Console App project, which is run as a Scheduled Task on our production servers. It mainly produces daily and weekly reports, does some database maintenance, etc.
The "main" of the console app handles inputting the command line arguments and determining which of several different processes to execute. Something like
Module MainModule

    Public Sub Main()
        ' Check if command line arguments were specified
        Dim args() As String = Environment.GetCommandLineArgs()
        If args.Length > 1 Then
            ConsoleMain(args)
        End If
    End Sub

    Public Sub ConsoleMain(ByVal args() As String)
        Dim rc As New Coordinator(enableEmails_)
        Try
            Dim arg_ As String = args(1)
            Dim success_ As Boolean = True
            Select Case arg_.ToUpper()
                Case TaskType.OrderIntegration
                    success_ = rc.OrderIntegration()
                Case TaskType.Motivators
                    success_ = rc.MotivatorsCreateFile(New CustomerMotivatorsManager)
                '... repeat for each of the various "Task Types"
            End Select
        Catch ex As Exception
            '...
        End Try
    End Sub

End Module
What my question is:
- This being a Console App with a Main() and a ConsoleMain(), I don't seem to have anything that I can access from a test; Main and ConsoleMain do not appear to be accessible. How can I unit-test something like this, i.e. verify that if argument 'x' is passed, function 'y' is called?
Thanks in advance,
I'm not sure why Main wouldn't be visible from your tests, unless VB.NET does some behind-the-curtains stuff to hide it.
In any case, why not move your code into its own class(es)? Then you can run unit tests against one class at a time, rather than executing the whole thing at once.
Unit tests usually execute against individual classes rather than against the Main entry point of an app; a sketch of that refactoring follows.
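For illustration, a minimal sketch of that refactoring in Java (the question's code is VB.NET, and every name below, including TaskDispatcher, is hypothetical): the argument-to-task mapping moves into a class that a test can instantiate directly.

import java.util.Map;
import java.util.function.Supplier;

// Holds the mapping from command-line argument to task, so a test can
// verify "if argument x is passed, task y runs" without touching Main.
final class TaskDispatcher {
    private final Map<String, Supplier<Boolean>> tasks;

    TaskDispatcher(Map<String, Supplier<Boolean>> tasks) {
        this.tasks = tasks;
    }

    boolean run(String taskName) {
        Supplier<Boolean> task = tasks.get(taskName.toUpperCase());
        if (task == null) {
            throw new IllegalArgumentException("Unknown task: " + taskName);
        }
        return task.get();
    }
}

Main then shrinks to wiring: build the map from the Coordinator's methods and call run(args(1)); a unit test passes stub suppliers that record whether they ran and asserts that the expected one was invoked.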
The spring-cloud-function-deployer examples all show the deployed function being loaded on startup, i.e. the ApplicationContext is started with the necessary properties, pointing at the packaged jar to load.
Is there a way to call the deployer programmatically at runtime, instead of relying on the auto-configuration? In case I want to deploy the function some time after the application context has started, or if I want to deploy multiple functions from the same jar, etc.
Also, is there a way to undeploy any loaded functions, or is it as simple as removing the function from the catalog?
As stated in the GH response, you absolutely can deploy functions at runtime:
String[] args = new String[] {
"--spring.cloud.function.location=target/it/bootapp/target/bootapp-1.0.0.RELEASE-exec.jar",
"--spring.cloud.function.definition=uppercase" };
ApplicationContext context = SpringApplication.run(DeployerApplication.class, args);
FunctionCatalog catalog = context.getBean(FunctionCatalog.class);
Function<String, String> function = catalog.lookup("uppercase");
// use the function
You can see sample deployments here and the corresponding test.
I am new to Postman and running into a recurrent issue that I can’t figure out.
I am trying to run the same request multiple times using an array of data set up in the pre-request script. However, when I go to the Runner, the request only runs once rather than 3 times.
Pre-request script:
var uuids = pm.environment.get("uuids");
if (!uuids) {
    uuids = ["1eb253c6-8784", "d3fb3ab3-4c57", "d3fb3ab3-4c78"];
}
var currentuuid = uuids.shift();
pm.environment.set("uuid", currentuuid);
pm.environment.set("uuids", uuids);
Tests:
var uuids = pm.environment.get("uuids");
if (uuids && uuids.length > 0) {
    postman.setNextRequest(myurl/?userid={{uuid}});
} else {
    postman.setNextRequest();
}
I have looked over the relevant documentation and I cannot find what is wrong with my code.
Thanks!
A pre-request script is not a good way to test an API with different data; the Postman Runner is better suited for this.
First, prepare a request in Postman that takes its data from a variable, e.g. {{uuids}}.
Then go to the Runner tab.
Prepare a CSV file with the data:
uuids
1eb253c6-8784
d3fb3ab3-4c57
d3fb3ab3-4c78
Provide it as the data file, and run the collection.
This lets you run the same request multiple times with different data and check your test cases against each iteration.
You are so close! The issue is that you are not unsetting your uuids environment variable, so it is an empty list at the start of each run. Simply add pm.environment.unset("uuids") to your exit branch and it should run all three times. Also, specify that your next request should stop the execution by setting it to null.
So your new "Tests" will become:
var uuids = pm.environment.get("uuids");
if (uuids && uuids.length > 0) {
    postman.setNextRequest("myurl/?userid={{uuid}}");
} else {
    postman.setNextRequest(null);
    pm.environment.unset("uuids");
}
It seems as though the Runner tab has been removed now?
For generating 'real' data, I found this video a great help: Creating A Runner in Postman-API Testing
Sending 1000 responses to the db to simulate real usage has saved a lot of time!
I'm doing research on unit testing in PL/SQL. I set up a test database with some tables and packages containing functions and procedures. Currently I'm trying out the test framework utPLSQL, but I stumbled upon an error when testing a ref cursor. I can run all of my tests, but the result of the test on the ref cursor says "ORA-01031: insufficient privileges", and that's all I get. How can I find the source of this error? Or has anyone encountered the same problem? The installation of utPLSQL was successful, and all the other functionality of the test framework works.
This is the function I want to test:
FUNCTION F_Get_Customers_RefCurs(P_LASTNAME IN VARCHAR2)
    RETURN cust_refcur
IS
    cust_result cust_refcur;
BEGIN
    OPEN cust_result FOR
        SELECT *
        FROM CUSTOMERS
        WHERE LASTNAME = P_LASTNAME
        ORDER BY email ASC;
    RETURN cust_result;
END F_Get_Customers_RefCurs;
I have declared cust_refcur in the spec of the package which contains my function as following:
TYPE cust_refcur IS REF CURSOR;
And this is the test:
PROCEDURE ut_F_Get_Customers_RefCurs
IS
    params utplsql_util.utplsql_params;
BEGIN
    utPLSQL_Util.reg_In_Param(1, 'Tester', params);
    UTASSERT.eq_refc_query(
        'Get customers on last name is successful (refcursor)',
        'PK_ORDERS.F_GET_CUSTOMERS_REFCURS',
        params,
        0,
        'SELECT customerid, firstname, lastname, email, password
         FROM CUSTOMERS
         WHERE LASTNAME = ''Tester''
         ORDER BY email ASC');
END;
I tried getting your example to work, but unfortunately, I got weird errors from utPLSQL.
Since the last version of utPLSQL on Sourceforge is from 2005 and Steven Feuerstein is now working on a commercial product that essentially does the same, I'd recommend looking into other solutions for unit testing your PL/SQL code - some links:
Oracle SQL Developer has some built-in unit-test functionality, and it's free
there's also Quest Code Tester (this one's commercial)
Maybe it's caused by 'RETURN cust_refcur': the type name utPLSQL uses is stored in a VARCHAR2(10), and 'cust_refcur' is 11 characters long. Try 'RETURN refcur' with a shorter type name?
-zhaozb
When running this assertion (eq_refc_query), utPLSQL needs to temporarily create a table. It does this using EXECUTE IMMEDIATE, which requires that the user have the CREATE TABLE privilege granted to them directly (e.g. GRANT CREATE TABLE TO your_test_user, run by a DBA), rather than via a role.
[Full disclosure: I am one of the administrators of the utPLSQL project]
How can I write unit/integration tests that talk to a database? For example:
public int GetAppLockCount(DbConnection connection)
{
string query :=
"SELECT"+CRLF+
" tl.resource_type AS ResourceType,"+CRLF+
" tl.resource_description AS ResourceName,"+CRLF+
" tl.request_session_id AS spid"+CRLF+
"FROM sys.dm_tran_locks tl"+CRLF+
"WHERE tl.resource_type = 'APPLICATION'"+CRLF+
"AND tl.resource_database_id = ("+CRLF+
" SELECT dbid"+CRLF+
" FROM master.dbo.sysprocesses"+CRLF+
" WHERE spid = ##spid)";
IRecordset rdr = Connection.Execute(query);
int nCount = 0;
while not rdr.EOF do
{
nCount := nCount+1;
rdr.Next;
}
return nCount;
}
In this case I am trying to exorcise this code of bugs (the IRecordset comes back as an empty recordset).
[UnitTest]
void TestGetLockCountShouldAlwaysSucceed();
{
DbConnection conn = GetConnectionForUnit_IMean_IntegrationTest();
GetAppLockCount(conn);
CheckTrue(True, "We should reach here, whether there are app locks or not");
}
Now all I need is a way to connect to some database when running a unit/integration test.
Do people store connection strings somewhere for the test runner to find? A .ini, .xml, or .config file?
Note: Language/framework agnostic. The code intentionally contains elements from:
C#
Delphi
ADO.net
ADO
NUnit
DUnit
in order to drive that point home.
Now all I need is a way to connect to some database when running a unit/integration test.
Either use an existing database or an in-memory database. I've tried both, and currently use an existing database that is splatted and rebuilt using Liquibase scripts in an Ant file.
Advantages to in-memory - no dependencies on other applications.
Disadvantages - Not quite as real, can take time to start up.
Advantages to real database - Can be identical to the real world
Disadvantages - Requires access to a 3rd party machine. More work setting up a new user (i.e. create new database)
Do people store connection strings somewhere for the test runner to find? A .ini, .xml, or .config file?
Yep. In C# I used a .config file; in Java, a .props file. With in-memory you can check this into version control, as it will be the same for each person; with a real database running somewhere, it will need to be different for each user.
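For the Java side, a minimal sketch of that lookup, assuming a test.properties file on the test classpath (the file name and property keys are hypothetical):

import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public final class TestDatabase {
    // Reads e.g. jdbc.url=jdbc:h2:mem:testdb, jdbc.user=sa, jdbc.password=
    // from /test.properties on the classpath and opens a connection with it.
    public static Connection open() throws IOException, SQLException {
        Properties props = new Properties();
        try (InputStream in = TestDatabase.class.getResourceAsStream("/test.properties")) {
            if (in == null) {
                throw new IllegalStateException("test.properties not on classpath");
            }
            props.load(in);
        }
        return DriverManager.getConnection(
                props.getProperty("jdbc.url"),
                props.getProperty("jdbc.user"),
                props.getProperty("jdbc.password"));
    }
}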
You will also need to consider seed data. In Java I've used dbUnit in the past. Not the most readable, but works. Now I use a Ruby ActiveRecord task.
How do you start this? First, can you rebuild your database? You need to be able to automate this before going too far down this road.
Next you should build up a blank local database for your tests. I go with one per developer; some other teams share one but don't commit. In a .NET/MS SQL world I think in-memory would be quite easy to do.
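A sketch of automating that rebuild in a JUnit 5 fixture, reusing the TestDatabase helper above (the schema.sql location is hypothetical, and naively splitting on semicolons assumes a simple DDL script):

import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.Statement;
import org.junit.jupiter.api.BeforeAll;

class DatabaseRebuildTest {
    // Drops and recreates the schema before the test class runs, so each
    // run starts from the same blank database.
    @BeforeAll
    static void rebuildSchema() throws Exception {
        String script = Files.readString(Path.of("src/test/resources/schema.sql"));
        try (Connection conn = TestDatabase.open();
             Statement stmt = conn.createStatement()) {
            for (String ddl : script.split(";")) {
                if (!ddl.isBlank()) {
                    stmt.execute(ddl);
                }
            }
        }
    }
}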
I've been out of touch with TDD for some time, and am very rusty. I'd appreciate some suggestions on how to TDD the method described below.
The method must find a web.config file in a user supplied application directory. It must then extract and return a connection string from that config file.
It returns null if for any reason the connection string isn't found, be it a bad path, no web.config, or no connection string in web.config.
My initial thoughts are to write a test with setup that creates a directory and writes a web.config file with a connection string. The test would then call my method with the created path and expect a non-null value back, and my initial test run would fail because my method stub always returns null.
Then, implement the method, and run the test expecting a pass. Then, as a pre-test (I forget the term), delete the created directory, and call the method expecting a null value.
First, I wouldn't have the method both find the file and extract the connection string. If your framework doesn't already have a method to determine whether a file exists in a given directory, write one; then, once you have a file, write a method to extract the connection string from an open stream. For testing, you could then supply a memory stream instead of having to actually create a directory and file.
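A hedged sketch of that split in Java (the web.config layout is assumed and every name is hypothetical): the extractor takes a Reader, so a test can feed it an in-memory stream instead of a real file.

import java.io.Reader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public final class ConnectionStringReader {
    // Returns the connectionString attribute of the first <add> element
    // under <connectionStrings>, or null if anything at all goes wrong.
    public static String read(Reader config) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(config));
            NodeList adds = doc.getElementsByTagName("add");
            for (int i = 0; i < adds.getLength(); i++) {
                Element add = (Element) adds.item(i);
                if ("connectionStrings".equals(add.getParentNode().getNodeName())
                        && add.hasAttribute("connectionString")) {
                    return add.getAttribute("connectionString");
                }
            }
            return null;
        } catch (Exception e) {
            return null; // the requirement: null for any failure
        }
    }
}

A test can then call ConnectionStringReader.read(new StringReader("<configuration>...</configuration>")) and never touch the file system.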
Second, if you aren't depending on a failed compile being your first failing test, then write your first attempt at the method to throw a NotImplementedException. It's a small step, but when you write your first test, at least it will fail. Of course, the first test on an empty stream will expect it to return null and the first code you write will be return null, but that's ok. Your next test will force you to change it. Continue on from there until you've got your completed methods.
You appear to have several TestCases with several distinct setUp fixtures.
- The FoundDirectory TestCase. The setUp creates the expected, valid file. This can have several subclasses:
  - A connection-string-not-found TestCase. The setUp creates the expected, but invalid, file.
  - A bad-path TestCase. The setUp creates the expected, but invalid, file.
  - A no-web.config TestCase. The setUp creates the expected, but invalid, file.
  - A no-connection-string-in-web.config TestCase. The setUp creates the expected, but invalid, file.
- The DidntFindDirectory TestCase. The setUp assures that the directory doesn't exist.
- The DidntFindFile TestCase. The setUp creates the directory but no file.
Make the object that holds your method (or the method itself) depend on an IConfigLoader of some sort that you can mock:
public interface IConfigLoader
{
XmlReader LoadAppConfigFrom(string path);
}
Use it from your method to get the XML file you want to parse.
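The same idea sketched in Java, with a hand-rolled fake instead of a mocking library (all names are hypothetical; ConnectionStringReader is the extractor sketched earlier in this thread):

import java.io.Reader;
import java.io.StringReader;

interface ConfigLoader {
    // Returns a Reader over the config, or null when the path is bad.
    Reader loadAppConfigFrom(String path);
}

final class ConnectionStringProvider {
    private final ConfigLoader loader;

    ConnectionStringProvider(ConfigLoader loader) {
        this.loader = loader;
    }

    // Null propagates for a bad path, matching the required behaviour.
    String connectionStringFor(String path) {
        Reader config = loader.loadAppConfigFrom(path);
        return config == null ? null : ConnectionStringReader.read(config);
    }
}

// In a test, the fake is a lambda:
// ConfigLoader fake = path -> new StringReader("<configuration></configuration>");
// ConnectionStringProvider provider = new ConnectionStringProvider(fake);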
I suggest that the story in your question mixes several issues:
finding and opening a file,
loading data into a "configuration" (however represented)
attempting to get a specific parameter from a "configuration"
Point 3 is now a matter of how a Configuration behaves, and can be developed in TDD fashion.
Point 2 is now a matter of how a Configuration is constructed (e.g. by a ConfigurationLoader), and can be developed in TDD fashion (e.g. against a StringReader).
Point 1 is now a matter of whether you can open a Reader for a specified file path. It is easy to add after completing point 2.
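A hedged sketch of points 2 and 3 in Java, using a flat key=value format purely for illustration (all names are hypothetical):

import java.io.IOException;
import java.io.Reader;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Point 3: a Configuration answers parameter lookups, returning null
// when the parameter is absent.
final class Configuration {
    private final Map<String, String> values;

    Configuration(Map<String, String> values) {
        this.values = new HashMap<>(values);
    }

    String get(String key) {
        return values.get(key);
    }
}

// Point 2: a ConfigurationLoader builds a Configuration from any Reader,
// so it can be test-driven against a StringReader before point 1 exists.
final class ConfigurationLoader {
    Configuration load(Reader source) throws IOException {
        Properties props = new Properties();
        props.load(source);
        Map<String, String> map = new HashMap<>();
        for (String name : props.stringPropertyNames()) {
            map.put(name, props.getProperty(name));
        }
        return new Configuration(map);
    }
}

For example, new ConfigurationLoader().load(new StringReader("connectionString=Server=.;Database=x")).get("connectionString") exercises both pieces with no file involved.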