POP3/IMAP server for unit testing

I'm looking for a simple POP3 and/or IMAP server for unit testing my application.
Requirements are:
no root privileges required to make it fully functional,
may store its data in whichever directory I choose,
compliant with the relevant RFCs,
possibility to add e-mails by hand.
I've tried Dovecot, but it seems too complicated, and running it without a special system account is practically impossible.
I know Mozilla should have one for Thunderbird testing, but the only one I have found was for newsgroups.

Why not use (or create) a mock server to test the functionality? It will return the correct responses to the various commands, so you can be sure that your code will work correctly when you connect it to a real server.
That way you're not reliant on a 3rd-party service for this aspect of your testing.
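For illustration, a bare-bones mock in C++ using plain POSIX sockets might look like the sketch below (the port and the canned responses are arbitrary choices; a real test double would cover whichever commands your client actually issues):

    // Minimal mock POP3 server sketch: accepts one client and replies with
    // canned, RFC 1939-style responses. Port and responses are arbitrary.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <cstring>
    #include <string>

    int main() {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(11110);                  // unprivileged port, no root needed
        bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(srv, 1);

        int cli = accept(srv, nullptr, nullptr);
        std::string greet = "+OK mock POP3 server ready\r\n";
        send(cli, greet.c_str(), greet.size(), 0);

        char buf[512];
        ssize_t n;
        while ((n = recv(cli, buf, sizeof(buf) - 1, 0)) > 0) {
            buf[n] = '\0';
            std::string reply = "+OK\r\n";             // accept USER/PASS/DELE etc.
            if (strncmp(buf, "STAT", 4) == 0)
                reply = "+OK 1 120\r\n";               // one message, 120 octets
            else if (strncmp(buf, "RETR", 4) == 0)
                reply = "+OK 120 octets\r\nFrom: a@test\r\n\r\nHello\r\n.\r\n";
            send(cli, reply.c_str(), reply.size(), 0);
            if (strncmp(buf, "QUIT", 4) == 0) break;
        }
        close(cli);
        close(srv);
    }

A test fixture would start this on a background thread (or as a child process) before the client under test connects, and shut it down afterwards.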

http://quintanasoft.com/dumbster/
http://www.icegreen.com/greenmail/
And probably many more. You start them in your test, so you don't have to create any system accounts.

How to use gssapi kerberos in c / c++ client server cross-platform programs?

I had to "sporadically" work with Heimdal / MIT Gssapi for kerberos authentication over past couple of years. I had to build an application that was to be used as a web-service running on a Linux box, and serve client applications like browsers, running on Windows and/or Linux Desktops and Workstations. Surely not the easiest of beasts to tame. Eventually when summarizing my work, I could record that the difficulties emanated due to challenges in multiple dimensions. Getting started with gssapi programming is truly a challenge just because of poor documentation, and practically non-existant tutorials. Googling mostly results in either some theoretical discussion on what's kerberos, or leads to content written with presumption that you already know everything besides some particular semantic issue.
Some really good hacks around here helped me, so I thought it would be a good idea to summarize the stuff from a developer's perspective and share it here as a sort of wiki, to give something back to this fantastic place and to fellow programmers.
I haven't really done a wiki like this before, and I am surely no authority on GSSAPI or Kerberos, so please be kind, but more than that please contribute and correct my mistakes. Site editors, I am counting on you to do your magic ;)
Getting your project completed successfully will require 3 specific things to be done correctly:
Setup of your test environment
Setup of your libraries
Your code
As I said already, such projects are beasts, simply because these three things have never been put together on the same page anywhere.
OK, so let's begin at the beginning.
Unavoidable theory for a newbie
GSSAPI helps a client application provide credentials that let a server authoritatively identify the user. This is extremely useful because server applications can then tailor their responses per user if they wish to. Naturally, therefore, both the client and the server applications must be kerberized, or, as some would say, kerberos-aware.
Kerberos-based authentication requires both the client and server applications to be members of a Kerberos realm. The KDC (Key Distribution Center) is the designated authority that rules the realm. Microsoft's Active Directory domain controllers are the most commonly encountered example of a KDC, though you can of course use a *NIX-based KDC. But surely without a KDC there can be no Kerberos business at all. Desktops, servers and workstations joined into the domain can identify each other as long as all of them remain joined into the domain.
For your initial experiments, set up the client and server applications in the same realm.
Kerberos authentication can also be used across realms, by creating trusts between the KDCs of those realms, or even by merging keytabs from different KDCs that do not trust each other. Your code will not really need any changes to accommodate such complex-sounding scenarios.
Kerberos authentication basically works via "tickets" (or tokens). When a member joins the realm, the KDC grants tickets to each of them. These tickets are unique; time and the FQDN are essential ingredients of these tickets.
Before you even think of the very first line of your code, make sure you have got these two things right:
Pitfall #1: When you set up your development and test environment, make sure everything is tested and addressed by FQDN. For example, if you want to check connectivity, ping using the FQDN, not the IP. Needless to say, therefore, all machines must have the same DNS configuration.
Pitfall #2: Make sure all the host systems - the ones running your KDC, the client software and the server software - use the same time server. Time synchronization is something one forgets, and only realizes is amiss after a lot of hair-splitting and head-banging!
Both the client and server applications NEED Kerberos keytabs. So if your application is going to run on a *NIX host and be part of a Microsoft domain, you have to get a Kerberos keytab generated before we look at the remaining preparatory steps for GSS programming.
The Step-by-Step Guide to Kerberos 5 (krb5 1.0) Interoperability is an absolute must-read.
The GSS-API Programming Guide is an excellent bookmark.
Depending upon your *NIX distribution, you can install the headers and libraries needed to build your code. My suggestion, however, is to download the source and build it yourself. Yes, you might not get it right in one go, but it is surely worth the trouble.
Pitfall #3: Make sure that your application is running in a Kerberos-aware environment.
I really learnt this the hard way, but maybe that's because I am not so smart. In my earliest stages of the GSSAPI programming struggle, I discovered that Kerberos keytabs were absolutely necessary for making my application kerberos-aware. But I simply couldn't find anything about how to load these keytabs in my application. You know why?!! Because no such API exists!!!
Because: The application is to be run in an environment which is aware of the keytabs.
OK, let me make this simple: your application that is supposed to do the GSSAPI / Kerberos things has to run after you have set the environment variable KRB5_KTNAME to the path where you have stored the keytabs. So either you do something like:
export KRB5_KTNAME=<path/to/your/keytab>
or make use of setenv() to set KRB5_KTNAME in your application, sufficiently before the very first line of your code that uses GSSAPI is run.
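In C/C++ that could look roughly like this (the keytab path is just a placeholder):

    #include <cstdlib>   // setenv()

    int main(int argc, char** argv) {
        // Point the Kerberos library at the service keytab *before* the first
        // GSSAPI call; the path below is a placeholder for your actual keytab.
        setenv("KRB5_KTNAME", "/path/to/your/keytab", 1 /* overwrite */);

        // ... the rest of your kerberized server startup goes here ...
        return 0;
    }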
We are now ready to do the necessary things in the application's code.
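As a starting point, here is a minimal sketch of the server-side ("acceptor") handshake. The recv_token()/send_token() helpers are hypothetical placeholders for however your application moves opaque tokens between client and server (a socket, an HTTP header, etc.); they are not part of GSSAPI.

    #include <gssapi/gssapi.h>
    #include <cstdlib>   // free()

    // Hypothetical transport helpers: move one opaque token over your own protocol.
    extern int recv_token(int fd, gss_buffer_t tok);
    extern int send_token(int fd, gss_buffer_t tok);

    int accept_client_context(int fd, gss_ctx_id_t* context, gss_name_t* client_name) {
        OM_uint32 maj, min;
        gss_cred_id_t server_creds = GSS_C_NO_CREDENTIAL;

        // Acquire acceptor credentials; the keytab pointed to by KRB5_KTNAME
        // is read behind the scenes here.
        maj = gss_acquire_cred(&min, GSS_C_NO_NAME, GSS_C_INDEFINITE,
                               GSS_C_NO_OID_SET, GSS_C_ACCEPT,
                               &server_creds, NULL, NULL);
        if (GSS_ERROR(maj)) return -1;

        *context = GSS_C_NO_CONTEXT;
        gss_buffer_desc in_tok = GSS_C_EMPTY_BUFFER;
        gss_buffer_desc out_tok = GSS_C_EMPTY_BUFFER;

        do {
            if (recv_token(fd, &in_tok) < 0) {          // token from the client
                maj = GSS_S_FAILURE;
                break;
            }
            maj = gss_accept_sec_context(&min, context, server_creds, &in_tok,
                                         GSS_C_NO_CHANNEL_BINDINGS, client_name,
                                         NULL, &out_tok, NULL, NULL, NULL);
            free(in_tok.value);                         // allocated by our recv_token
            if (out_tok.length > 0) {                   // reply token, if any
                send_token(fd, &out_tok);
                gss_release_buffer(&min, &out_tok);
            }
        } while (maj == GSS_S_CONTINUE_NEEDED);

        gss_release_cred(&min, &server_creds);
        return GSS_ERROR(maj) ? -1 : 0;
    }

The client side mirrors this loop with gss_init_sec_context(); once the context is established, gss_wrap()/gss_unwrap() can protect the application payload, and gss_display_name() gives you the authenticated principal.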
I understand there are quite a few other aspects that the application developer must review in order to write and test an application. I know of a few environment variables that can be important.
Can anybody please shed some more light on that?

Is using Mirth Connect or any other interface engine overkill in this situation?

I've been assigned a small project and directed to use Mirth Connect as part of the solution. We currently do not use Mirth but because we have an upcoming project that will require an interface engine, I was asked to use it for this project so I can gain experience with it. However, I think it's a poor suggestion for this project; I also know my boss would not want me to implement something that adds unnecessary complexity just for the sake of learning.
With that said, I want to make sure I have valid reasons for suggesting that Mirth Connect should not be used for this project. Neither of us know much about it, but I think he's been convinced it is the end all solution for all things interface/webservice related. I appreciate any input I can get from those of you who have more experience with the product than I have.
This is a very simple project in that we have a client needing to make a handful of requests into our system from theirs in order to retrieve and update data. For example, they will make a request to get patient demographics, a request to add an admission for a patient, a request to get a list of possible care settings from our application, etc. For this project we will not use HL7 but a set of predefined XML messages.
Both the client's application and our application reside on the client's network.
They do not want to build any services of their own, so the services we build need to handle all of the work. The results returned in response to their calls to the services will be returned as XML.
There are no plans to integrate any other applications with theirs or ours in the foreseeable future.
It seems to me the best option would be for us to build a standalone web service that would take their request and send back an XML response. I just don't see any reason to include Mirth Connect in the picture (other than for learning but that can be gained in other ways).
What are your thoughts? Is it true that the interface engine is not a good choice if the client wants to receive data from our system without having a receiving mechanism on their end? In other words, they want to make a web service call such as GetCareSettings and to get a response back with an XML representation of all the possible care settings in our system. It seems to me they would need a web service on their end for Mirth to use as a destination to send the results. All Mirth is going to send back is an ACK message, correct? (Unless of course it wrote the data to another webservice on the client end, which they have said they do not want to do.)
Thanks for taking the time to read this. I hope my lack of knowledge and understanding of Mirth Connect and the use of interface engines hasn't made this question difficult to answer.
From what I understand, your client appears to be either a lab or a third-party service vendor who will take inputs from your application, such as patient demographics, appointments, provider details, etc. Basically, they want to query your application.
A) HL7: It has the capacity to handle query requests and responses with demographics. I am assuming you already know about QRY messages.
B) XML/web services/SOAP: still a viable solution, a little more concrete, and it can be expanded to handle custom requests like GetCareSettings, or any other. The vendor is not just interested in fetching patient-related data but also in other inputs for which HL7 might not be enough.
As far as the approach goes, my professional advice is to use an interface engine. You are not limited to Mirth Connect; you can also use Iguana if you want. A good reason that comes instantly to mind is that an engine gives you an advantage in troubleshooting, support and maintenance activity.
Your web service responses can be handled easily by the HTTP Sender connector type and through RESTful web services.
The engine is also capable of handling large volumes of requests and responses at the same time, which is not required right now, but I think will be the case later on. Your channel's source would change to a Web Service Listener.
Another good approach is to do away with XML and use JSON for handling requests and responses; it is much more lightweight than XML, which saves you network overhead. We are doing some similar work, but we are sending requests to a web service through JSON.
Overall, Mirth is there to make your life easier.
Good Luck!

C++ code on server, running on client machine

Is it possible to write C++ code that interfaces with a server but is executed on the client side, in the browser instead of natively?
For example, imagine using open-source classes to produce a file.
But because you don't want all this work to be done on the server, you run it in the browser.
So the client supplies a file (or two, or more) as input, the code runs on their machine, the final result is produced, and then this file is uploaded to a database on the server.
Please see the Google Native Client project: http://code.google.com/p/nativeclient/
This is a strange question.
You can prepare binaries that do the task you want done on the client side, and make the server send the proper binary to the client when asked for it. The client then runs this binary and returns the results to the server.
It is possible if you know the configurations of the client machines (the binaries must work on them). Also, some security layer has to be implemented: you don't want to allow just any binary to run on the client (imagine a man-in-the-middle attack where malicious code is run on the client).
I think your request contradicts the idea behind server-side programming. The main purpose of using server-side programs is to make use of infrastructural components like the database, network, etc. in a controlled manner. (The most typical usage of server-side applications is web sites with server-side coding like JSP and ASP.)
Since servers are machines that are to be kept secure, a remote application should not be permitted to make changes or access the filesystem freely. If you want to make changes on a server, such as doing database operations or reading/writing files, you should use applications that run on the server, or provide interfaces like web services or web sites to remote client applications.
So there are a couple of solutions if you want to do the work in the browser and then have the results posted to a server database.
First of all, you must set up your server ready for database work. I have done this using the MEAN stack, set up a MongoDB and interfaced it with the Mongoose API.
Now, for the meat of the question: there are many examples of browsers doing intensive work. The majority of these applications, though, are written not in C++ but in JavaScript.
If you really want to focus on C++ (like I did in the past, at the time I asked this question, wanting to make something big for college), then you could do one of the following:
* Use Google Native Client (NaCl). This is a sandbox for running compiled C and C++ code in the browser efficiently and securely, independent of the user's operating system.
* You may also want to check out Emscripten, which is a framework for translating C and C++ code to JavaScript. This way, you can take the C or C++ code that already works and translate it to JavaScript, so that it works in the browser too; a rough sketch of that route follows.
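As an illustration of the Emscripten route (the function, the hashing logic and the build line below are only an example, and the exact emcc flags vary between Emscripten versions):

    // process.cpp - a C++ routine intended to run in the browser via Emscripten.
    #include <emscripten/emscripten.h>

    // extern "C" avoids name mangling so the JavaScript side can find the symbol;
    // EMSCRIPTEN_KEEPALIVE stops the compiler from dead-stripping it.
    extern "C" EMSCRIPTEN_KEEPALIVE
    int checksum(const char* data, int len) {
        unsigned int sum = 0;
        for (int i = 0; i < len; ++i)
            sum = sum * 31u + static_cast<unsigned char>(data[i]);
        return static_cast<int>(sum & 0x7fffffff);
    }

    // Build (roughly): emcc process.cpp -o process.js
    // The page can then run the function on the user's file contents (for
    // example via Module.ccall) and upload only the result to the server.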

How to test an XMPP client?

I've developed a simple XMPP chat client (for Android, using the aSmack library). Now I would like to test the client to see if it does what it is supposed to do (i.e. fetch the list of contacts, refresh the contact list, receive messages). Since it uses the Smack library, I assume it is pretty much safe, but still...
How could I check if my fetched list of contacts is the one returned by the server? How to check if the presence status of certain contact is the correct one?
Regarding unit tests, I was thinking of mocking the server side and testing the client side, but that doesn't seem of much use because I would like to test it with real server data.
Is there some automated tool for this? Or would it be enough just to distribute the application to my friends and tell them to use it for a while and report any misbehaviors?
You'll just have to trust aSmack. You could use logcat to investigate the XMPP stanzas returned by the server "by hand" and compare them with your client's behaviour. You could also increase the verbosity of your server's log (if you have access) and compare that way. Automated testing, however, would require some sort of XMPP parser - but that's exactly what aSmack is. I'm sure the aSmack developers have already tested it thoroughly using their own methods.

Socket Server vs. Standard Servers

I'm working on a project of which a large part is server side software. I started programming in C++ using the sockets library. But, one of my partners suggested that we use a standard server like IIS, Apache or nginx.
Which one is better in the long run? When I program it in C++, I have direct access to the raw requests, whereas with a standard server I need to use a scripting language to handle the requests. In any case, which is the better option, and why?
Also, when it comes to security against things like DDoS attacks, do the standard servers already have protection? If I wanted to implement it in my socket server, what is the best way?
"Server side software" could mean lots of different things, for example this could be a trivial app which "echoes" everything back on a specific port, to a telnet/ftp server to a webserver running lots of "services".
So where in this gamut of possibilities does your particular application lie? Without further information, it's difficult to make any suggestions, but let's see..
1. Web services, i.e. your "server side" requirement is to handle individual requests and respond having done some set of business logic. Typically communication is via SOAP/XML, and this is ideal if you have web-based clients (though nothing prevents you from accessing these services via standalone clients). Typically you host these on web servers as you mentioned, and often they are most easily written in Java (I've yet to come across one that needed to be written in C++!)
2. Simple web site - slightly different to the above: responds to HTTP GET/POST requests and serves up static or dynamic content (I'm guessing this is not what you're after!)
3. Standalone server which responds to something specific; here you'd have to implement your own "messaging"/protocols etc., and the server will carry out a specific function on each incoming request and potentially send responses back. The key thing here is that the server does something specific and is not a generic container (at which point 1 makes more sense!)
So where does your application lie? If 1 or 2, use Java or some scripting language (such as Perl/ASP/JSP etc.). If 3, you can certainly use C++, and if you do, use a suitable abstraction such as boost::asio and Google Protocol Buffers, and save yourself a lot of headache...
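If you do land in camp 3, a bare-bones synchronous boost::asio server is only a handful of lines. The port number and the newline framing below are placeholders; in practice you would read a length prefix followed by a serialized Protocol Buffers message.

    #include <boost/asio.hpp>
    #include <string>

    int main() {
        using boost::asio::ip::tcp;
        boost::asio::io_context io;                                  // io_service on older Boost
        tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 5555));  // port is arbitrary

        for (;;) {
            tcp::socket socket(io);
            acceptor.accept(socket);

            // Placeholder framing: one newline-terminated request per connection.
            // A real service would read a length prefix, then a protobuf payload.
            boost::asio::streambuf request;
            boost::asio::read_until(socket, request, '\n');

            // ... decode the request and run the business logic here ...

            std::string reply = "ack\n";
            boost::asio::write(socket, boost::asio::buffer(reply));
        }
    }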
With regard to security, of course bugs and security holes are found all the time; however, the good thing with some of these open-source projects is that the community will tackle and fix them. Let's just say you'll be safer using them than your own custom hand-rolled implementation; the likelihood that you'll be able to address all the issues they have encountered in the years they've been around is very small (no disrespect to your abilities!)
EDIT: now that there's a little more info, here is one possible approach (this is what I've done in the past, and I've used Java most of the way..)
1. The client-facing server should be something reliable, especially if it's exposed over the internet; here I would use a proven product, something like Apache or IIS (depending on which technologies you have available). IMHO, I would go for JBoss AS - a really powerful and easily customizable piece of kit that integrates really nicely with lots of different things (all Java, of course!). You could then have a simple bit of Java which delegates to your actual server processes that do the work..
2. For the server processes, you can use C++ if that's what you are comfortable with.
There is one key bit which I left out, and that is how 1 & 2 talk to each other. This is where you should look at an open-source messaging product (at an even higher level than asio or protocol buffers), and here I would look at something like ZeroMQ or Red Hat Messaging (both are MQ-style messaging products). The great advantage of this type of "messaging bus" is that there is no tight coupling between your servers. With your own hand-rolled implementation you'll be writing lots of boilerplate to get the interaction to work just right; with something like MQ you'll have multi-platform communication without having to get into the details... You will save yourself a lot of time and bother if you elect to use something like that. (BTW, there are other messaging products out there, and some are easier to use - such as Tibco RV or EMS etc. - however they are commercial products and the licenses will cost a lot of money!)
With a messaging solution your servers become trivial, as they simply handle incoming messages and send messages back out again, and you can focus on the business logic...
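To make the messaging idea concrete, here is a trivial ZeroMQ request/reply worker using the plain C API (the endpoint and the payloads are placeholders):

    #include <zmq.h>       // ZeroMQ C API (cppzmq is a thin C++ wrapper over this)
    #include <cstdio>

    int main() {
        void* ctx = zmq_ctx_new();
        void* responder = zmq_socket(ctx, ZMQ_REP);     // reply side of a REQ/REP pair
        zmq_bind(responder, "tcp://*:5560");            // endpoint is arbitrary

        for (;;) {
            char buf[256];
            int n = zmq_recv(responder, buf, sizeof(buf) - 1, 0);
            if (n < 0) break;                           // interrupted or context closed
            if (n > (int)sizeof(buf) - 1) n = (int)sizeof(buf) - 1;
            buf[n] = '\0';
            std::printf("request: %s\n", buf);

            // ... hand the request to the business logic, build a reply ...

            zmq_send(responder, "ack", 3, 0);           // reply to whoever asked
        }

        zmq_close(responder);
        zmq_ctx_destroy(ctx);
        return 0;
    }

The client-facing Java piece would hold the matching REQ socket, so neither side needs to know where the other is running.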
my two pennies... :)
If you opt for the first solution in Nim's list (web services), I would suggest you have a look at WSO2's web services framework for C++, Axis C++, and the Axis2/C web services framework (if you are not restricted to C++). Web services might be the best solution for your requirement, as you can build them quickly and use them either as processing or proxy modules on the server side of your system.