I'm trying to learn TDD, and my first project is a PHP-based project to help me organise my somewhat small MP3 collection. I'm aware that there are many other, much better solutions out there, but this is simply for the sake of getting to grips with TDD.
I require a method that will accept a filename and return the duration of the MP3 using a command-line call to ffmpeg. Is it possible to test this method short of pointing it at a real MP3? Should I just test the simple things, such as whether the file even exists? Should I bother testing it at all?
I'd appreciate your thoughts.
Many thanks in advance.
EDIT: I'm sorry for not mentioning that the call to ffmpeg is not via a class or API as such, but via the CLI.
$output = shell_exec("{$ffmpeg_exe} -i \"{$file_path}\" 2>&1");
This is where I'm having trouble testing. I don't think there's any way to mock this without using Runkit, which I'd like to avoid, since it can't be installed via Composer, and I'd like to avoid dependencies that need to be compiled.
It depends on what you want to call a "unit test" :)
In my opinion, a unit test should not depend on anything outside of the class under test. It definitely should not be making network requests, database calls, or touching the file system.
The test you describe is one I would call an integration or acceptance test, and writing a failing acceptance test is often the start of my personal TDD cycle. Then, if appropriate, I break that down into smaller pieces and write a failing unit test, and cycle red-green-refactor on multiple unit tests until the acceptance test is passing.
Another way to make your tests more isolated is to use test doubles (the general term for mocks, stubs, fakes, spies, etc.). This might be the best approach in your case: have a test double that behaves like the object that interacts with the file system, but whose behaviour you control. I would accept this as an argument, and if nothing is passed in, fall back to the built-in file interaction libraries.
I haven't done PHP in a while, but here's some pseudocode:
function duration(filename, filesystem)
    filesystem = built-in PHP file-interaction object, if filesystem isn't passed in
    file = filesystem.find(filename)
    return file.duration
end
That way, in your tests you can do something like:
test "duration returns the duration of the file whose name is passed in"
    fake_file = stub that returns 3:08 when we call duration on it
    fake_filesystem = stub that returns fake_file when we call find on it with a filename
    assert_equal duration("some_filename.mp3", fake_filesystem), 3:08
end
Again, super pseudocode!!! Best of luck!!
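If it helps to see that in something closer to real PHP, here is a minimal sketch of the same idea, assuming PHPUnit; FileSystem, MediaFile, and LocalFileSystem are invented names for illustration rather than an existing API:

interface FileSystem {
    public function find(string $filename): MediaFile;
}

class MediaFile {
    public function __construct(private string $duration) {}
    public function duration(): string { return $this->duration; }
}

function duration(string $filename, ?FileSystem $filesystem = null): string {
    // Fall back to a real, disk-backed implementation (not shown here) when nothing is injected.
    $filesystem ??= new LocalFileSystem();
    return $filesystem->find($filename)->duration();
}

final class DurationTest extends \PHPUnit\Framework\TestCase {
    public function testReturnsTheDurationOfTheNamedFile(): void {
        // Hand-rolled stub standing in for the real file system.
        $fakeFilesystem = new class implements FileSystem {
            public function find(string $filename): MediaFile { return new MediaFile('3:08'); }
        };
        $this->assertSame('3:08', duration('some_filename.mp3', $fakeFilesystem));
    }
}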
Response to your edit:
Aaahhh I see. I assume you're then parsing the $output to extract the information you want and to handle any errors that might be returned by running this command. That, IMO, is the interesting part of your code that should be tested. So I'd probably put the shell_exec within a function that I would then not bother testing, but in the tests for your parsing logic, I would stub the return value of this function to be able to test what you do when you get various output.
I'd also have at least one integration or acceptance test that would actually call shell_exec and actually need a file present at the $file_path.
This shell_exec line is basically doing string concatenation, so it's not really worth testing. TDD is not so much a rule as a guideline, and there are times when you have to decide not to test :)
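To make that concrete, here is a hedged PHPUnit sketch of the "wrap shell_exec behind a seam and stub it in the test" idea; Mp3Info, runFfmpeg(), and the canned ffmpeg output are invented for illustration, not the asker's actual code:

class Mp3Info {
    public function __construct(private string $ffmpegExe) {}

    public function durationOf(string $filePath): string {
        $output = $this->runFfmpeg($filePath);
        // The parsing of ffmpeg's output is the interesting, unit-testable part.
        if (preg_match('/Duration:\s*(\d{2}:\d{2}:\d{2}\.\d+)/', $output, $m)) {
            return $m[1];
        }
        throw new RuntimeException('Could not read a duration from the ffmpeg output');
    }

    // Thin, deliberately untested seam around the CLI call.
    protected function runFfmpeg(string $filePath): string {
        return (string) shell_exec("{$this->ffmpegExe} -i \"{$filePath}\" 2>&1");
    }
}

final class Mp3InfoTest extends \PHPUnit\Framework\TestCase {
    public function testParsesTheDurationOutOfFfmpegOutput(): void {
        // Override the seam in an anonymous subclass instead of trying to mock shell_exec itself.
        $info = new class('ffmpeg') extends Mp3Info {
            protected function runFfmpeg(string $filePath): string {
                return "Input #0, mp3, from 'song.mp3':\n  Duration: 00:03:08.45, start: 0.000000";
            }
        };
        $this->assertSame('00:03:08.45', $info->durationOf('song.mp3'));
    }
}

The one integration/acceptance test would then construct the real Mp3Info and point it at an actual MP3 on disk.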
You can stub the call to ffmpeg so that your tests don't need real files.
I know there are existing tools for testing a ColdFusion application (MXUnit, MockBox), but I'm creating a custom tool, so that it will require less configuration.
When I run a unit test file, it's done via a generic 'model' which retrieves all the functions from the unit test file. Within each test function I need to call assertEquals, but those functions live in the model, so I cannot access them.
I tried passing the model itself to the unit test file so it can call the model's functions directly, but it doesn't work, and it adds logic to the test file, which I don't like.
I could also extend the model in the test file, but then I would have to call the test file directly, call super.init(this) so the model can fetch the test functions, etc.
Is there a way to achieve this kind of process? What's the best option?
In answer to your question, it sounds like you want to inject variables / methods into the subject under test. You can do it like so:
myInstance["methodName"] = myFunction;
You can then call the injected method like so:
myInstance.myFunction();
Both MXUnit and TestBox use this technique.
Having said that, I don't quite understand why you want to re-invent the wheel. TestBox is an excellent, proven testing framework with a wealth of features that would take you an incredible amount of time to replicate. I'm not quite sure what your configuration issue could be - it really doesn't require very much setup. It might be worth asking how to set up and use TestBox rather than how to build your own testing solution :)
There is a good book on TestBox (available in a free version) which you can read here: http://testbox.ortusbooks.com/
Good luck!
I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?
What I am currently doing is writing a testcase per class (this is my rule of thumb: a "unit" is a class, and each class has one or more testcases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?
An example: Think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile.
To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.
What I would do in order to test the "real" application: making the http request (either manually or automated) with some ids from the database and then looking in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, integration testing seemed to be presented as more of an anti-pattern.
IMO (and I have no literature to back me up on this), the key difference between the various forms of testing is scope:
Unit testing is testing isolated pieces of functionality [typically a method or stateful class]
Integration testing is testing the interaction of two or more dependent pieces [typically a service and consumer, or even a database connection, or connection to some other remote service]
System integration testing is testing of a system end to end [a special case of integration testing]
If you are familiar with unit testing, then it should come as no surprise that there is no such thing as a perfect or 'magic-bullet' test. Integration and system integration testing is very much like unit testing, in that each is a suite of tests set to verify a certain kind of behavior.
For each test, you set the scope which then dictates the input and expected output. You then execute the test, and evaluate the actual to the expected.
In practice, you may have a good idea how the system works, and so writing typical positive and negative path tests will come naturally. However, for any application of sufficient complexity, it is unreasonable to expect total coverage of every possible scenario.
Unfortunately, this means unexpected scenarios will crop up in Quality Assurance [QA], PreProduction [PP], and Production [Prod] cycles. At which point, your attempts to replicate these scenarios in dev should make their way into your integration and system integration suites as automated tests.
Hope this helps, :)
ps: pet-peeve #1: managers or devs calling integration and system integration tests "unit tests" simply because NUnit or MSTest was used to automate them ...
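To make the difference in scope concrete, here is a hedged PHP sketch in the spirit of the Controller/Repository/OutfileWriter example from the question (assuming PHPUnit; all class names are invented): the unit-scope test isolates the Controller behind hand-rolled fakes, while the integration scope would wire in the real pieces.

interface Repository {
    /** @return string[] */
    public function fetch(array $ids): array;
}

interface Writer {
    public function write(string $line): void;
}

final class Controller {
    public function __construct(private Repository $repo, private Writer $writer) {}
    public function export(array $ids): void {
        foreach ($this->repo->fetch($ids) as $record) {
            $this->writer->write($record);
        }
    }
}

final class CollectingWriter implements Writer {
    /** @var string[] */
    public array $lines = [];
    public function write(string $line): void { $this->lines[] = $line; }
}

// Unit scope: no database, no file system, runs in milliseconds.
final class ControllerUnitTest extends \PHPUnit\Framework\TestCase {
    public function testWritesEachFetchedRecord(): void {
        $repo = new class implements Repository {
            public function fetch(array $ids): array { return ['first', 'second']; }
        };
        $writer = new CollectingWriter();
        (new Controller($repo, $writer))->export(['1', '2']);
        $this->assertSame(['first', 'second'], $writer->lines);
    }
}

// Integration scope would construct Controller with the real database-backed Repository and a
// real OutfileWriter pointed at a temp file, then assert on the file's contents.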
What you describe is indeed integration testing (more or less). And no, it is not an anti-pattern, but a necessary part of the software development lifecycle.
Any reasonably complicated program is more than the sum of its parts. So however well you unit test it, you still have not much clue about whether the whole system is going to work as expected.
There are several aspects of why it is so:
unit tests are performed in an isolated environment, so they can't say anything about how the parts of the program are working together in real life
the "unit tester hat" easily limits one's view, so there are whole classes of factors which the developers simply don't recognize as something that needs to be tested*
even if they do, there are things which can't be reasonably tested in unit tests - e.g. how do you test whether your app server survives under high load, or if the DB connection goes down in the middle of a request?
* One example I just read from Luke Hohmann's book Beyond Software Architecture: in an app which applied strong antipiracy defense by creating and maintaining a "snapshot" of the IDs of HW components in the actual machine, the developers had the code very well covered with unit tests. Then QA managed to crash the app in 10 minutes by trying it out on a machine without a network card. As it turned out, since the developers were working on Macs, they took it for granted that the machine has a network card whose MAC address can be incorporated into the snapshot...
What I would do in order to test the "real" application: making the http request (either manually or automated) with some ids from the database and then looking in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic?
Maybe you are duplicating code, but you are not duplicating effort. Unit tests and integration tests serve two different purposes, and usually both are desired in the SDLC. If possible, factor out code used by both unit and integration tests into a common library. I would also try to have separate projects for your unit and integration tests, because your unit tests should be run separately (fast and with no dependencies). Your integration tests will be more brittle and break more often, so you will probably have a different policy for running and maintaining those tests.
Is this what is called an "integration test"?
Yes indeed it is.
In an integration test, just as in a unit test, you need to validate what happened in the test. In your example you specified an OutfileWriter; you would need some mechanism to verify that the file and its data are good. You really want to automate this, so you might want to have something like:
class OutFileValidator {
    function isCorrect(fName, dataList) {
        // open the file, read the data, and
        // run the validation logic
    }
}
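For what it's worth, a hedged PHP sketch of such a validator could look like this (the class name and file paths are invented for illustration):

final class OutFileValidator {
    /** @param string[] $expectedLines */
    public function isCorrect(string $fileName, array $expectedLines): bool {
        if (!is_readable($fileName)) {
            return false;
        }
        // Read the file line by line and compare it against the expected data.
        $actualLines = file($fileName, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
        return $actualLines === $expectedLines;
    }
}

// In the integration test, after driving the application end to end:
// $this->assertTrue((new OutFileValidator())->isCorrect('/tmp/outfile.txt', ['first', 'second']));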
You might review "Taming the Beast", a presentation by Markus Clermont and John Thomas about automated testing of AJAX applications.
YouTube Video
Very rough summary of a relevant piece: you want to use the smallest testing technique you can for any specific verification. Spelling the same idea another way, you are trying to minimize the time required to run all of the tests, without sacrificing any information.
The larger tests, therefore are mostly about making sure that the plumbing is right - is Tab A actually in slot A, rather than slot B; do both components agree that length is measured in meters, rather than feet, and so on.
There's going to be duplication in which code paths are executed, and possibly you will reuse some of the setup and verification code, but I wouldn't normally expect your integration tests to include the same level of combinatoric explosion that would happen at a unit level.
Driving your TDD with BDD would cover most of this for you. You can use Cucumber / SpecFlow with Watir / WatiN. Each feature has one or more scenarios, and you work on one scenario (behaviour) at a time; when it passes, you move on to the next scenario until the feature is complete.
To complete a scenario, you have to use TDD to drive the code necessary to make each step in the current scenario pass. The scenarios are agnostic to your back end implementation, however they verify that your implementation works; if there is something that isn't working in the web app for that feature, the behaviour needs to be in a scenario.
You can of course use integration testing, as others pointed out.
Unit and integration testing is usually performed as part of a development process, of course. I'm looking for ways to use this methodology in configuration of an existing system, in this case the Asterisk soft PBX.
In the case of Asterisk, the configuration file is as much a programming language as anything else, complete with loops, jumps, conditionals, etc., and can get quite complex. Changes to the configuration often suffer from the same problems as changes to a complex software product - it can be hard to foresee all the effects without tests in place. It's made worse by the fact that the nature of the system is to communicate with external entities, i.e. make phone calls.
I have a few ideas about testing the system using call files (to create specific calls between extensions) while watching the manager interface for generated events. A test could then watch for an expected result, i.e. dialling *99# should result in the Voicemail application getting called.
The flaws are obvious - it doesn't test the actual result, only what the system thinks is the result, and it probably requires some modification of the system under test. It's also really hard to write these tests robustly enough to only trigger on the expected output, especially if the system is in use (i.e. there are other calls in progress).
Is what I want, a testing system for Asterisk, impossible? If not, do you have any ideas about ways to go about this in a reasonable manner? I'm willing to put a fair amount of development time into this and release the result under a friendly license, but I'm unsure about the best way to approach it.
This is obviously an old question, so there's a good chance that when the original answers were posted here that Asterisk did not support unit / integration testing to the extent that it does today (although the Unit Test Framework API went in on 12/22/09, so that, at least, did exist).
The unit testing framework (David's e-mail from the dev list here) lets you execute unit tests directly within Asterisk. Tests are registered with the framework and can be executed / viewed through the CLI. Since this is all part of Asterisk, the tests are compiled into the executable. You do have to configure Asterisk with the --enable-dev-mode option, and mark the tests for compilation using the menuselect tool (some applications, like app_voicemail, automatically register tests - but they're the minority).
Writing unit tests is fairly straightforward - and while it (obviously) isn't as fully featured as a commercial unit test framework, it gets the job done and can be enhanced as needed.
That most likely isn't what the majority of Asterisk users are going to want to use - although Asterisk developers are highly encouraged to check it out. Both users and developers are probably interested in integration tests, which the Asterisk Test Suite provides. At its core, the Test Suite is a Python script that executes other scripts - be they Lua, Python, etc. The Test Suite comes with a set of Python and Lua libraries that help to orchestrate and execute multiple Asterisk instances. Test writers can use third-party applications such as SIPp, or Asterisk interfaces (AMI, AGI), or a combination thereof, to test the hosted Asterisk instance(s).
There are close to 200 tests now in the Test Suite, with more being added on a fairly regular basis. You could obviously write your own tests that exercise your Asterisk configuration and have them managed by the Test Suite - if they're generic enough, you could submit them for inclusion in the Test Suite as well.
Note that the Test Suite can be a bit tricky to set up - Leif wrote a good blog post on setting up the Test Suite here.
Well, it depends on what you are testing. There are a lot of ways to handle this sort of thing. My preference is to use Asterisk call files bundled with dialplan code, e.g.: create a call file to dial some public number; once it is answered, hop back to the specified dialplan context and perform all of my testing logic (play sound files, listen for keypresses, etc.).
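As a rough illustration only - the field names follow the standard Asterisk call file format, but the channel, context, and sound file here are invented - a call file dropped into /var/spool/asterisk/outgoing/ plus the dialplan context it hands the answered call to might look like:

Channel: Local/*99@from-internal
MaxRetries: 0
RetryTime: 60
WaitTime: 30
Context: test-voicemail
Extension: s
Priority: 1

; extensions.conf -- the context that runs the testing logic once the call is answered
[test-voicemail]
exten => s,1,Answer()
 same => n,Playback(beep)
 same => n,Hangup()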
I wrote an Asterisk call file library which makes this sort of testing EXTREMELY easy. It has a lot of documentation / examples too, check it out here: http://pycall.org/. That may help you.
Good luck!
You could create a set of specific scenarios and use Asterisk's MixMonitor command to record these calls. This would enable you to establish a set of sound recordings that were normative for your system for these tests, and use an automated sound file comparison tool (Perhaps something from comparing-sound-files-if-not-completely-identical?) to examine the results. Just an idea.
Unit testing, as opposed to integration testing, means your code is supposed to be architected so that the logic itself is insulated from external dependencies. You said "the configuration file is as much a programming language as anything else", but that's the thing: real languages have not just control flow but abstraction capabilities, which allow you to write the logic in a way that can be unit tested. That's why I keep logic outside of Asterisk as much as possible.
For integration testing, script linphonec to drive your application, and grep the Asterisk console to see what it's doing.
You can use Docker and fire up temporary Asterisk instances for each test.
I am a heavy advocate of proper Test Driven Design or Behavior Driven Design and I love writing tests. However, I keep coding myself into a corner where I need to use 3-5 mocks in a particular test case for a single class. No matter which way I start, top down or bottom up, I end up with a design that requires at least three collaborators at the highest level of abstraction.
Can somebody give good advice on how to avoid this pitfall?
Here's a typical scenario. I design a Widget that produces a Midget from a given text value. It always starts really simple until I get into the details. My Widget must interact with several hard to test things like file systems, databases, and the network.
So, instead of designing all that into my Widget I make a Bridget collaborator. The Bridget takes care of one half of the complexity, the database and network, allowing me to focus on the other half which is multimedia presentation. So, then I make a Gidget that performs the multimedia piece. The entire thing needs to happen in the background, so now I include a Thridget to make that happen. When all is said and done I end up with a Widget that hands work to a Thridget which talks over a Bridget to give its result to a Gidget.
Because I'm working in CocoaTouch and trying to avoid mock objects, I use the self-shunt pattern, where abstractions over collaborators become protocols that my test adopts. With 3+ collaborators my test balloons and becomes too complicated. Even using something like OCMock mock objects leaves me with an order of complexity that I'd rather avoid. I've tried wrapping my brain around a daisy-chain of collaborators (A delegates to B, who delegates to C, and so on) but I can't envision it.
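For readers unfamiliar with self-shunt, here is a hedged sketch of the pattern in PHP rather than Objective-C (assuming PHPUnit; Widget and BridgetDelegate are invented names): the test case itself adopts the collaborator's protocol and passes itself to the class under test.

interface BridgetDelegate {
    public function didFetchData(string $data): void;
}

final class Widget {
    public function __construct(private BridgetDelegate $delegate) {}
    public function fetch(): void {
        // The real class would do network/database work here before reporting back.
        $this->delegate->didFetchData('payload');
    }
}

final class WidgetTest extends \PHPUnit\Framework\TestCase implements BridgetDelegate {
    private ?string $received = null;

    // The test itself plays the role of the collaborator (the "shunt").
    public function didFetchData(string $data): void {
        $this->received = $data;
    }

    public function testWidgetReportsItsResultToTheDelegate(): void {
        (new Widget($this))->fetch();
        $this->assertSame('payload', $this->received);
    }
}

With three or more such protocols, the test class ends up adopting them all, which is exactly the ballooning described above.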
Edit
Taking an example from below let's assume we have an object that must read/write from sockets and present the movie data returned.
// Assume myRequest is a String param...
InputStream aIn = aSocket.getInputStream();
OutputStream aOut = aSocket.getOutputStream();
DataProcessor aProcessor = ...;

// This gets broken into a "Network" collaborator.
for (char c : myRequest.toCharArray()) aOut.write(c);
Object data = aIn.read(); // Simplified read

// This is our second collaborator
aProcessor.process(data);
Now the above obviously deals with network latency, so it has to be threaded. This introduces a thread abstraction to get us out of the business of writing threaded unit tests. We now have:
AsynchronousWorker myWorker = getWorker(); // here's our third collaborator
myWorker.doThisWork(new WorkRequest() {
    // Assume myRequest is a String param...
    DataProcessor aProcessor = ...;

    // Use our "Network" collaborator.
    NetworkHandler networkHandler = getNetworkHandler();
    Object data = networkHandler.retrieveData(); // Simplified read

    // This is our multimedia collaborator
    aProcessor.process(data);
});
Forgive me for working backwards without tests, but I'm about to take my daughter outside and I'm rushing through the example. The idea here is that I'm orchestrating the collaboration of several collaborators from behind a simple interface that will get tied to a UI button click event. So the outermost test reflects a Sprint task that says: given a "Play Movie" button, when it is clicked, the movie will play.
Edit
Let's discuss.
Having many mock objects shows that:
1) You have too many dependencies.
Re-look at your code and try to break it down further. In particular, try to separate data transformation from processing.
I don't have experience with the environment you are developing in, so let me give an example from my own experience.
With a Java socket, you are given an InputStream and an OutputStream so that you can read data from, and send data to, your peer. So your program looks like this:
InputStream aIn = aSocket.getInputStream();
OutputStream aOut = aSocket.getOutputStream();

// Read data
Object data = aIn.read(); // Simplified read

// Process
if (data.equals('1')) {
    // Do something
    // Write data
    aOut.write('A');
} else {
    // Do something else
    // Write another data
    aOut.write('B');
}
If you want to test this method, you end up having to create mocks for In and Out, which may require quite complicated supporting classes behind them.
But if you look carefully, reading from aIn and writing to aOut can be separated from the processing. So you can create another class which takes the input that was read and returns an output object.
public class ProcessSocket {
    public Object process(Object readObject) {
        if (readObject.equals(...)) {
            // Do something
            // Write data
            return 'A';
        } else {
            // Do something else
            // Write another data
            return 'B';
        }
    }
}
and your previous method becomes:
InputStream aIn = aSocket.getInputStream();
OutputStream aOut = aSocket.getOutputStream();
ProcessSocket aProcessor = ...;

// Read data
Object data = aIn.read(); // Simplified read
aProcessor.process(data);
This way you can test the processing with little need for mocks. Your test can be:
ProcessSocket aProcessor = ...;
assert(aProcessor.process('1').equals('A'));
Because the processing is now independent of the input, the output, and even the socket.
2) You are over-unit-testing: unit testing what should be integration tested.
Some tests are not suited to unit testing (in the sense that they require unnecessary extra effort and may not efficiently give you a good indicator). Examples of this kind of test are those involving concurrency and user interfaces. They require different ways of testing than unit testing.
My advice would be to break them down further (similar to the technique above) until some parts are suitable for unit testing, so that only the small hard-to-test parts remain.
EDIT
If you believe you have already broken it down into very fine pieces, perhaps that is your problem.
Software components or sub-components are related to each other in the way characters are combined into words, words into sentences, sentences into paragraphs, and paragraphs into subsections, sections, chapters, and so on.
My suggestion was to break subsections down into paragraphs, but it sounds as though you have already gone all the way down to words.
Look at it this way: most of the time, paragraphs are related to other paragraphs more loosely than sentences are related to (or depend on) other sentences. Subsections and sections are looser still, while words and characters are more tightly dependent on each other (as the grammatical rules kick in).
So perhaps you are breaking things down so finely that the language syntax forces those dependencies on you, which in turn forces you to have so many mock objects.
If that is the case, your solution is to balance the tests. If a part is depended on by many others and requires a complex set of mock objects (or simply more effort to test), maybe you don't need to test it in isolation. For example, if A uses B, C uses B, and B is very hard to test, why not just test A+B as one unit and C+B as another? In my example, if ProcessSocket were so hard to test that you would spend more time writing and maintaining the tests than developing the code, then it would not be worth it, and I would just test the whole thing at once.
Without seeing your code (and given that I have never developed for CocoaTouch) it is hard to tell, and I may not be able to provide a good comment here. Sorry :D
EDIT 2
Seeing your example, it is pretty clear that you are dealing with an integration issue. I assume you already test the movie playing and the UI separately. It is understandable why you need so many mock objects. If this is the first time you have used this kind of integration structure (this concurrency pattern), then those mock objects may actually be needed and there is not much you can do about it. That's all I can say :-p
Hope this helps.
My solution (not CocoaTouch) is to continue to mock the objects, but to refactor the mock setup into a common test method. This reduces the complexity of the tests themselves while retaining the mock infrastructure that lets me test my class in isolation.
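A hedged PHPUnit sketch of that refactoring, reusing the invented Widget/Bridget/Gidget names from the question - the point is only that the mock wiring lives once in setUp() instead of in every test:

interface Bridget { public function fetch(): string; }
interface Gidget { public function present(string $data): void; }

final class Widget {
    public function __construct(private Bridget $bridget, private Gidget $gidget) {}
    public function play(): void {
        $this->gidget->present($this->bridget->fetch());
    }
}

final class WidgetTest extends \PHPUnit\Framework\TestCase {
    private $bridget;
    private $gidget;
    private $widget;

    protected function setUp(): void {
        // Common mock setup shared by every test in this case.
        $this->bridget = $this->createMock(Bridget::class);
        $this->gidget  = $this->createMock(Gidget::class);
        $this->widget  = new Widget($this->bridget, $this->gidget);
    }

    public function testPassesFetchedDataToThePresenter(): void {
        $this->bridget->method('fetch')->willReturn('movie-data');
        $this->gidget->expects($this->once())->method('present')->with('movie-data');
        $this->widget->play();
    }
}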
I do some fairly complete testing, but it's automated integration testing and not unit-testing, so I have no mocks (except the user: I mock the end-user, simulating user-input events and testing/asserting whatever's output to the user): Should one test internal implementation, or only test public behaviour?
What I'm looking for is best practices using TDD.
Wikipedia describes TDD as,
a software development technique that relies on the repetition of a very short development cycle: First the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test and finally refactors the new code to acceptable standards.
It then goes on to prescribe:
Add a test
Run all tests and see if the new one fails
Write some code
Run the automated tests and see them succeed
Refactor code
I do the first of these, i.e. "very short development cycle", the difference in my case being that I test after it's written.
The reason why I test after it's written is so that I don't need to "write" any tests at all, even the integration tests.
My cycle is something like:
Rerun all automated integration tests (start with a clean slate)
Implement a new feature (with refactoring of the existing code if necessary to support the new feature)
Rerun all automated integration tests (regression testing to ensure that new development hasn't broken existing functionality)
Test the new functionality:
a. End-user (me) does user input via the user interface, intended to exercise the new feature
b. End-user (me) inspects the corresponding program output, to verify whether the output is correct for the given input
When I do the testing in step 4, the test environment captures the user input and program output into a data file; the test environment can replay such a test in the future (recreate the user input, and assert whether the corresponding output is the same as the expected output captured previously). Thus, the test cases which were run/created in step 4 are added to the suite of all automated tests.
I think this gives me the benefits of TDD:
Testing is coupled with development: I test immediately after coding instead of before coding, but in any case the new code is tested before it's checked in; there's never untested code.
I have automated test suites, for regression testing
I avoid some costs/disadvantages:
Writing tests (instead I create new tests using the UI, which is quicker and easier, and closer to the original requirements)
Creating mocks (required for unit testing)
Editing tests when the internal implementation is refactored (because the tests depend only on the public API and not on the internal implementation details).
To get rid of excessive mocking you can follow the Test Pyramid which suggests having a lot of unit + component tests and a smaller number of slow & fragile system tests. It boils down to several simple rules:
Write tests at the lowest possible level that doesn't require mocking. If you can write a unit test (e.g. parsing a string), then write it. But if you want to check whether the parsing is invoked by the upper layer, that requires setting up more of the stack.
Mock external systems. Your system needs to be a self-contained, independent piece. Relying on external apps (which have their own bugs) complicates testing a lot. Writing mocks/stubs is much easier.
After that, have a couple of tests checking your app against the real integrations, as in the sketch below.
With this mindset you eliminate almost all mocking.
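For example, here is a hedged PHP sketch of those rules (assuming PHPUnit; NetworkHandler, MovieLoader, and the JSON payload are invented for illustration): the external system sits behind a small interface, the parsing gets a fast unit test against a hand-written stub, and only a few slow tests touch the real thing.

interface NetworkHandler {
    // Boundary around the external system (the network), as the second rule suggests.
    public function retrieveData(string $request): string;
}

final class StubNetworkHandler implements NetworkHandler {
    public function __construct(private string $cannedResponse) {}
    public function retrieveData(string $request): string { return $this->cannedResponse; }
}

final class MovieLoader {
    public function __construct(private NetworkHandler $network) {}
    public function titleFor(string $request): string {
        // The parsing itself is plain string work, so it gets fast unit tests (first rule).
        $payload = json_decode($this->network->retrieveData($request), true);
        return $payload['title'] ?? 'unknown';
    }
}

final class MovieLoaderTest extends \PHPUnit\Framework\TestCase {
    public function testExtractsTheTitleFromTheResponse(): void {
        $loader = new MovieLoader(new StubNetworkHandler('{"title":"Big Buck Bunny"}'));
        $this->assertSame('Big Buck Bunny', $loader->titleFor('movie-42'));
    }
}

// On top of this, a couple of slower end-to-end tests would exercise the real network path.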
I'm writing an upload function for ColdFusion of Wheels and need to unit test it once it's finished. The problem I'm having though is that I have no idea on how to create a mock multi-part form post in ColdFusion that I can use in my unit tests.
What I would like to be able to do is create the mock request simulating a file being uploaded that cffile could then process and I could check against.
I saw, in the online ColdFusion help, an example of creating such a request using cfhttp; however, it has to post to another page, which kind of defeats the whole purpose.
Great question rip. For what it's worth, I contribute to MXUnit (I wrote the Eclipse plugin) and this scenario came up in a presentation I did at cfobjective this year on writing easier-to-test code (http://mxunit.org/doc/zip/marc_esher_cfobjective_2009_designing_for_easy_testability.zip).
In this scenario, I'd suggest NOT testing the upload. I believe we shouldn't spend time testing stuff that's not our code. The likelihood that we'll catch a bug or other oddity is sufficiently low for me to justify leaving it untested. I believe we should be testing "our" code, however.
In your scenario, you have two behaviors: 1) the upload and 2) the post-upload behavior. I'd test the post-upload behavior.
This now frees your unit test to not care about the source of the file. Notice how this actually results in a decoupling of the upload logic from the "what do I do with the file?" logic. Taken to its conclusion, this at least creates the potential (theoretically) to reuse this post-upload logic for stuff other than just uploads.
This keeps your test much easier, because now you can just test against some file that you put somewhere in the setUp of the unit test itself.
So your component changes from
<cffunction name="uploadAndDoStuff">
to
<cffunction name="upload">
and then
<cffunction name="handleUpload">
or "handleFile" or "doSomethingWithFile" or "processNetworkFile" or some other thing. and in your unit test, you leave upload() untested and just test your post-upload handler. For example, if I were doing this at work, and my requirements were: "Upload file; move file to queue for virus scanning; create thumbnails if image or jpg; add stuff to database; etc" then I'd keep each of those steps as separate functions, and I'd test them in isolation, because I know that "upload file" already works since it's worked since CF1.0 (or whatever). Make sense?
Better yet, leave the "upload" out of the component entirely. There's nothing wrong with keeping it in a CFM file, since there's kind of not much point (as far as I can see) in attempting to genericize it. There might be benefits where mocking is concerned, but that's a different topic altogether.
I did a quick search for MXUnit testing a form upload and came up with this google groups thread : Testing a file upload
It's a discussion between Peter Bell and Bob Silverberg, the outcome of which is that testing a file upload is actually part of acceptance testing, as opposed to a unit test.
I know that doesn't strictly answer your question, but I hope it helps.
Strictly speaking, you are right. If you have to call out to an external resource to test something (database, web service, file uploader), it does defeat the purpose of unit testing. The best general advice is to mock out the behaviours of external resources and assume they function, or are covered by their own unit tests.
Pragmatism can alter this, though. On the Model-Glue framework codebase, we have a number of unit tests that call out to external resources, for things like persisting values across a redirect, connecting the AbstractRemotingService functionality, and so on. These were deemed important enough features to test, and we chose to make external dependencies part of our unit tests to ensure good coverage. After all, for framework code there ARE NO acceptance tests. Those are done by our users, and users of a framework expect flawless code, as they should.
So in cases where you want to test a vital external resource and have it handled by an automated test, it can make sense to add it to your unit tests. Just know that you are deviating from a best practice that is there for a reason.
Dan Wilson