How to integration/unit test software hardware interfaces - unit-testing

I'm working on a small, fun project to build a robot.
We programmers are working in parallel with the people building the robot, so it is very often the case that we are trying to run changed software while the builders have changed the hardware. If the software tests are not passing, it is always hard to figure out whether the software or the hardware is failing, or, even worse, whether the integration is failing.
There are some hard parts to testing these issues automatically.
We have figured out some ways of breaking things down: we have RC control to let the robot go through some movements without software, to assure that it still works.
Then we start some software tests that make the robot run through some defined figures, to show that the software behaves the same way as before. But this always comes down to a very time-consuming task, because you can't automate it: someone has to start the test, watch the test and try to figure out whether the robot did what it should do.
Another problem is that constant testing on our real hardware wears out parts of it: joints, motors, gear wheels and so on.
But not testing has proven to cause so much trouble and consume so much time that I would like to know what techniques are used in other projects that deal with hardware/software interaction, and whether there are tools out there that can be used.

The interface between the robot and the software must be defined first; not necessarily exhaustively, this can be done incrementally. Start small, for instance with basic moves (forward, backward); then, once those have been fully tested, both in isolation and integrated, add some behaviour (e.g. turn left, turn right) and retest. That way, the whole team can use what it has learned along the way to extend the interface, possibly minimizing interface rework.
The Progress Before Hardware article describes such a process in greater detail, focusing on the Test-Driven Development (TDD) aspect.
See also the answers to the How to do TDD with hardware question.

I think it's a very interesting situation.
I believe there's no problem with your testing process. If you mock your robot and test against this mock, that's all good.
If the hardware robot behaves differently from your mocked robot, there's another big problem: the communication.
The interface between the software and the hardware is the "protocol" specification. In my opinion it must not be changed without discussion. The hardware guys may not change it on their own, and neither may you software guys! You may only change it together. In your situation, everybody changes it on their own.
In your situation, your teams seem to work against each other. So focus your efforts on your interface and especially your communication, not on an integration test that won't work anyway.
My suggestion would be to use a software mock of the robot as the one and only specification. Then you can rely on your mock, and there is a central point that defines the connection between hardware and software.
When one side wants to change it, fine: it has to be discussed with the other side, and then you change the software mock. If the hardware was changed and the mock was not, you have a justification, because you developed against your specification.
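For illustration, a minimal Java sketch of that idea. The names (RobotDriver, FakeRobot) and the commands are invented for this example; the point is that one shared interface plus one mock becomes the single written-down agreement between the teams:

```java
// RobotDriver.java -- the shared interface IS the specification.
// Neither team changes it without discussing it with the other.
public interface RobotDriver {
    void forward(int millimetres);
    void turnLeft(int degrees);
}
```

```java
// FakeRobot.java -- the software mock used in automated tests.
// It records every command so tests can assert on the sequence sent.
import java.util.ArrayList;
import java.util.List;

public class FakeRobot implements RobotDriver {
    public final List<String> commands = new ArrayList<>();

    @Override
    public void forward(int millimetres) {
        commands.add("forward " + millimetres);
    }

    @Override
    public void turnLeft(int degrees) {
        commands.add("turnLeft " + degrees);
    }
}
```

The software tests then drive the FakeRobot and assert on the recorded commands, while the hardware team implements the same RobotDriver interface against the real motors.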
Good luck!

Related

Testing... how do the pros do it, and what techniques can scale to single-person development?

I've been writing software for years, but have never mastered the art of testing. My typical testing includes thorough run-throughs on my machines, and then testing in various operating systems via VMware. Mainly a brute-force play-with-it-until-it-breaks-or-doesn't approach. Where possible I work on actual hardware, but this isn't always practical.
My question is twofold:
How do medium-sized professional development houses do their testing?
What common techniques or procedures (outside of unit testing) can apply to a team of one developer? I'm looking for practicality.
Thank you for your time and input.
Step 1: Unit Testing
Divide your software into components (which can be anything from single functions up to whole programs) and unit-test those components thoroughly, especially as it relates to the API and behavior that the rest of the application can see. (Don't forget to check for failure modes too, but beware of binding too carefully to the exact nature of a failure; it's often good enough to just test for the presence of the right class of exception rather than its exact message.) Make sure those tests pass; you're homing in on a specification of what the component should be doing. (Automated test running helps here, as does a CI system.) This is important because of…
Step 2: Integration Testing
Test that the compositions of components that make up the application work (this is integration testing). Ideally you'll only be finding bugs in the specifications of things at this point (hah!): wherever you identify that a component is wrong despite passing its unit tests, that tells you there is a bug. Whenever things fail to work together despite being told to do so, you've probably got a bug in your specs from the previous step, so you typically resolve these things by adding more detail to your unit tests and fixing the components until they work.
Note that to integrate well, you want to keep this stage simple enough that the integration itself is in the “Obviously No Bugs” class of programs instead of the larger “No Obvious Bugs” class. An integration framework like Spring or a scripting language can help a lot here (though with the latter you have to guard against creating components on the sly; if you create a component then admit it and make sure it has a proper usage contract and unit tests to ensure that it meets its contract).
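To illustrate (in Java, with invented names) what an “Obviously No Bugs” integration layer can look like: the wiring code below only constructs components and plugs them together, so there is nothing in it worth getting wrong.

```java
// The components themselves are unit tested; the wiring is kept so
// thin that a reviewer can verify it by inspection.
interface Greeter {
    String greet(String name);
}

class PoliteGreeter implements Greeter {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}

class ConsoleApp {
    private final Greeter greeter;

    ConsoleApp(Greeter greeter) {
        this.greeter = greeter;
    }

    void run() {
        System.out.println(greeter.greet("world"));
    }
}

public class Wiring {
    // All construction, no logic: this is the whole integration layer.
    public static void main(String[] args) {
        new ConsoleApp(new PoliteGreeter()).run();
    }
}
```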
Where you can, make components by composing others together; these higher-level components need to be unit tested as characterized in Step 1 above. This might sound like extra work – it probably is – but it has the advantage that you can use automated tests for larger parts of the program. (Alas, it's harder to do all integration tests with an automated test tool; such things tend to work better for unit tests where you can mock out all the irrelevant parts.) But this doesn't save you from…
Step 3: Acceptance Testing
This is where the overall application is tested to see if it actually does what is desired. This might be automatable, but usually isn't. This is the level where you bring in users to let them see whether things are what they expected, though you might want to use internal testers a bit first. How easy this all is depends on the nature of the application.
Note also that user interfaces tend to spend more time in this step than the others, precisely because what makes for a good UI is difficult to impossible to pin down in algorithms (it relates much more to human psychology after all).
A final note: What I've written here sounds like testing is meant to be a laborious process that takes ages at the end of a project. It isn't! You can often get parts of an application done before others, do an integration of those parts (with mocks for the other bits) and test quite a bit of how acceptable this sub-application really is. Of course, when doing this take care to stop users from believing that everything is done; one way is to have dialog boxes that pop up and say things like “magic to happen here”. Silly but effective. :-)
For a small team, unit tests or automatic integration tests are crucial, because there are no hands and no time for manual testing - the more you automate the better. This includes Continuous Integration.
Set up a separate 'beta' environment that is as close to your production environment as possible. Do most of your tests there - this way you will pick up all the things you've forgotten in your 'release plan'.
As a professional tester, my suggestion is that you should have a healthy mix of automated and manual testing. The examples below are in .NET, but it should be easy to find a tool for whatever technology you are using.
AUTOMATED TESTING
Unit Testing
Use NUnit to test your classes, functions and interaction between them.
http://www.nunit.org/index.php
Automated Functional Testing
If it's possible you should automate a lot of the functional testing. Some frameworks have functional testing built into them. Otherwise you have to use a tool for it. If you are developing web sites/applications you might want to look at Selenium.
http://www.peterkrantz.com/2005/selenium-for-aspnet/
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to view as pass/fail. A human can use their intelligence to find faults and raise questions that come up while testing something else.
Exploratory Testing
ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn, and fun!
http://www.satisfice.com/articles/et-article.pdf
My testing tool examples are Java based, but I will try to suggest tools which are ported to multiple languages or are language agnostic.
Use unit testing tools like junit (ported to a variety of languages). This will allow you to refactor your code safely. Most code bugs should result in the addition or correction of at least one test.
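As a sketch of that last point (JUnit 4 here, but the idea ports to any xUnit framework; the Order class and the bug are invented for the example): a reported bug first becomes a failing test, then the fix makes it pass and keeps the bug from coming back.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrderTotalRegressionTest {

    // Minimal stand-in for the class under test, only so the sketch compiles.
    static class Order {
        int total() {
            return 0;
        }
    }

    // Hypothetical bug report: "an empty order reported a wrong total".
    // The test is added before the fix and stays in the suite afterwards.
    @Test
    public void emptyOrderHasZeroTotal() {
        assertEquals(0, new Order().total());
    }
}
```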
Use revision control, and set up an automated build environment that checks out the code and builds it. It should then run the automated test suite. If the application uses a database, the build environment should have its own database. Use different code branches for production (released) and development code.
Use integration testing tools like HttpUnit or Synergy to test web applications. Tools of this type are basically language agnostic, but you may want to choose a tool which can be extended in the language(s) you are using. For non-web applications, there may not be an equivalent tool for your platform. You may also want to use a performance tool like JMeter.
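The exact calls depend on the tool, so rather than guessing at HttpUnit's API, here is a tool-neutral sketch of the same kind of check using only the JDK's built-in HTTP client (Java 11+); the URL is a placeholder for wherever the build deploys the application.

```java
import static org.junit.Assert.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

public class SmokeTest {

    // Runs against the test environment after the automated build has
    // deployed the application; fails fast if the app isn't serving pages.
    @Test
    public void homePageResponds() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/"))  // placeholder URL
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode());
    }
}
```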
These tools have some setup costs, but a quick payback. Overall development time may be the same or less than if you didn't use the tools.
Acceptance tests generally don't lend themselves to automated testing. Where they do, include them in the integration testing. Get acceptance feedback as early and often as possible.
How do the pros do it? That all depends on who the 'pro' is... There are dozens of different approaches to testing, and plenty of experts to tell you that their way is the one true way. Agile gurus will tell you a very different story from the waterfall gurus. The ISTQB guys will tell you something very different from the Context-Driven guys.
Unfortunately there is no one true way, and you have to figure it out for yourself. Your approach to testing depends on too many factors. That's probably not very helpful, but you just need to be aware that any answer you get here will be only one option of many, and it may be completely inappropriate for your situation.
Personally, after several years in software testing, I have decided to align myself with the Context Driven school of software testing. See: http://www.context-driven-testing.com
Secondly, from your description of you current approach, that sounds a lot like exploratory testing to me. You may find this material interesting: satisfice.com/sbtm/
One thing you can do (combined with all the previous suggestions) is identify the risky and critical areas of your app and try to focus your testing efforts on these areas.

What is test-driven development (TDD)? Is an initial design required?

I am very new to test-driven development (TDD) and have not yet started using it.
But I know that we have to write tests first and then the actual code to pass the test and refactor it till the design is good.
My concern over TDD is where it fits in our systems development life cycle (SDLC).
Suppose I get a requirement of making an order processing system.
Now, without having any model or design for this system, how can I start writing tests?
Don't we need to define the entities and their attributes before proceeding?
If not, is it possible to develop a big system without any design?
There are two levels of TDD: ATDD, or acceptance-test-driven development, and normal TDD, which is driven by unit tests.
I guess the relationship between TDD and design is influenced by the somewhat "agile" concept that source code IS the design of a software product. A lot of people reinforce this by translating TDD as Test Driven Design rather than development. This makes a lot of sense as TDD should be seen as having a lot more to do with driving the design than testing. Having acceptance and unit tests at the end of it is a nice side effect.
I cannot really say too much about where it fits into your SDLC without knowing more about it, but one nice workflow is:
For every user story:
Write acceptance tests using a tool like FitNesse or Cucumber; these specify what the desired outputs are for the given inputs, from a perspective that the user understands. This level automates the specifications, and can even replace specification documentation in ideal situations.
Now you will probably have a vague idea of the sort of software design you might need as far as classes / behaviour etc goes.
For each behaviour:
Write a failing test that shows how you would like calling code to use the class (see the sketch after this list).
Implement the behaviour that makes the test pass.
Refactor both the test and the actual code to reflect good design.
Go onto the next behaviour.
Go onto the next user story.
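A sketch of what the first failing test in that inner loop might look like, using the order-processing example from the question (JUnit 4; Order and its methods are invented here and don't exist yet, which is exactly why the test fails first):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrderTest {

    // Written before the Order class exists: it documents how we would
    // like calling code to use it, and it stays red until we implement it.
    @Test
    public void addingALineIncreasesTheTotal() {
        Order order = new Order();
        order.addLine("widget", 2, 500);   // description, quantity, unit price in cents
        assertEquals(1000, order.total());
    }
}
```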
Of course the whole time you will be thinking of the evolving high-level design of the system. Ideally TDD will lead to a flexible design at the lower levels that permits the appropriate high-level design to evolve as you go rather than trying to guess it all at the beginning.
It should be called Test Driven Design, because that is what it is.
There is no practical reason to separate the design into a specific phase of the project. Design happens all the time. From the initial discussion with the stakeholder, through user story creation, estimation, and then of course during your TDD sessions.
If you want to formalize the design using UML or whatever, that is fine, just keep in mind that the code is the design. Everything else is just an approximation.
And remember that You Aren't Gonna Need It (YAGNI) applies to everything, including design documents.
Writing tests first forces you to think first about the problem domain, and the tests act as a kind of specification. Then, in a second step, you move to the solution domain and implement the functionality.
TDD works well iteratively:
Define your initial problem domain (can be small, evolutionary prototype)
Implement it
Grow the problem domain (add features, grow the prototype)
Refactor and implement it
Repeat step 3.
Of course you need to have a vague architectural vision upfront (technologies, layers, non-functional requirements, etc.). But the features that bring added value to your application can be introduced nicely with TDD.
See related question TDD: good for a starter?
With TDD, you don't care much about design. The idea is that you must first learn what you need before you can start with a useful design. The tests make sure that you can easily and reliably change your application when the time comes that you need to decide on your design.
Without TDD, this happens: you make a design (which is probably too complex in some areas, plus you forgot to take some important facts into account since you didn't know about them). Then you start implementing the design. With time, you realize all the shortcomings of your design, so you change it. But changing the design doesn't change your program. Now you try to change your code to fit the new design. Since the code wasn't written to be changed easily, this will eventually fail, leaving you with two designs (one broken and the other in an unknown state) and code which doesn't fit either.
To start with TDD, turn your requirements into tests. To do this, ask "How would I know that this requirement is fulfilled?" When you can answer this question, write a test that implements the answer to it. This gives you the API which your (to be written) code must adhere to. It's a very simple design, but one that a) always works and b) is flexible (because you can't test inflexible code).
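For example, the (hypothetical) requirement "an empty order cannot be submitted" answers the question "how would I know it is fulfilled?" with a pair of tests like these (JUnit 4, invented Order API, continuing the order-processing example):

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class OrderSubmissionTest {

    // The requirement is fulfilled when an empty order refuses submission
    // and an order with at least one line accepts it.
    @Test
    public void emptyOrderCannotBeSubmitted() {
        assertFalse(new Order().canSubmit());
    }

    @Test
    public void orderWithALineCanBeSubmitted() {
        Order order = new Order();
        order.addLine("widget", 1, 500);
        assertTrue(order.canSubmit());
    }
}
```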
Also starting with the test will turn you into your own customer. Since you try hard to make the test as simple as possible, you will create a simple API that makes the test work.
And over time, you'll learn enough about your problem domain to be able to make a real design. Since you have plenty of tests, you can then change your code to fit the design. Without terminally breaking anything on the way.
That's the theory :-) In practice, you will encounter a couple of problems but it works pretty well. Or rather, it works better than anything else I've encountered so far.
Well, of course you need a solid functional analysis first, including a domain model; without knowing what you'll have to create in the first place, it's impossible to write your unit tests.
I use test-driven development to program, and I can say from experience it helps create more robust, more focused and simpler code. My recipe for TDD goes something like this:
Using a unit-test framework (I've written my own) write code as you wish to use it and tests to ensure return values etc. are correct. This ensures you only write the code you're actually going to use. I also add a few more tests to check for edge cases.
Compile - you will get compiler errors!!!
For each error add declarations until you get no compiler errors. This ensures you have the minimum declarations for your code.
Link - you will get linker errors!!!
Write enough implementation code to remove the linker errors.
Run - your unit tests will fail. Write enough code to make the tests succeed.
You've finished at this point. You have written the minimum code you need to implement your feature, and you know it is robust because of your tests. You will also be able to detect if you break things in the future. If you find any bugs, add a unit test to test for that bug (you may not have thought of an edge case, for example). And you know that if you add more features to your code you won't make it incompatible with existing code that uses your feature.
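In Java the compile and link steps collapse into one, but the shape of the recipe is the same. A minimal sketch (with the invented Order class used in the earlier examples) of what "write enough code to make the test succeed" ends up looking like; anything further would be code no test currently demands:

```java
import java.util.ArrayList;
import java.util.List;

// Just enough to satisfy the tests written so far: a running total
// and an "is there anything in the order?" check.
public class Order {

    private final List<int[]> lines = new ArrayList<>();

    public void addLine(String description, int quantity, int unitPriceCents) {
        lines.add(new int[] { quantity, unitPriceCents });
    }

    public int total() {
        int sum = 0;
        for (int[] line : lines) {
            sum += line[0] * line[1];
        }
        return sum;
    }

    public boolean canSubmit() {
        return !lines.isEmpty();
    }
}
```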
I love this method. Makes me feel warm and fuzzy inside.
TDD implies that there is some existing design (external interface) to start with. You have to have some kind of design in mind in order to start writing a test. Some people will say that TDD itself requires less detailed design, since the act of writing tests provides feedback to the design process, but these concepts are generally orthogonal.
You need some form of specification, rather than a form of design -- design is about how you go about implementing something, specification is about what you're going to implement.
The most common form of specs I've seen used with TDD (and other agile processes) is user stories -- an informal kind of "use case" which tends to be expressed in somewhat stereotyped English sentences like "As a <role>, I can <do something>" (the form of user stories is more or less rigid depending on the exact style/process in use).
For example, "As a customer, I can start a new order", "As a customer, I can add an entry to an existing order of mine", and so forth, might be typical if that's what your "order entry" system is about (the user stories would be pretty different if the system wasn't "self-service" for users but rather intended to be used by sales reps entering orders on behalf of users, of course -- without knowing what kind of order-entry system is meant, it's impossible to proceed sensibly, which is why I say you do need some kind of specification about what the system's going to do, though typically not yet a complete idea about how it's going to do it).
Let me share my view:
If you want to build an application, along the way you need to test it, e.g. check the values of variables you create by code inspection, or quickly drop in a button that you can click on to execute a part of the code and pop up a dialog showing the result of the operation, etc. TDD, on the other hand, changes your mindset.
Commonly, you just rely on the development environment, like Visual Studio, to detect errors as you code and compile; somewhere in your head you know the requirement, and you just code and test via buttons and pop-ups or code inspection. That is "syntax-debugging-driven development". TDD, by contrast, is "semantic-debugging-driven development", because you write down your thoughts and goals for your application first by using tests (a more dynamic and repeatable version of a whiteboard), which test the logic (the "semantics") of your application and fail whenever you have a semantic error, even if your application has no syntax errors (it compiles).
In practice you may not know or have all the information required to build the application. Since TDD more or less forces you to write tests first, you are compelled to ask more questions about the functioning of the application at a very early stage of development, rather than building a lot only to find out that much of what you have written is not required (or at least not at the moment). You can really avoid wasting your precious time with TDD (even though it may not feel like that initially).

Forcing Unit Testing on Developers

First a little background. The company I work for writes web based software that is a hosted solution for our customers (ie ASP (Application Service Provider)). We are adopting agile practices such as Scrum and we execute sprints to build new features for our product.
I am a proponent of TDD (Test Driven Design), and as a part of what I deliver in a sprint I always write tests and I always get them integrated with the build (ie ccnet); however the other developers do not follow this practice and it is not enforced.
Is it a good practice to force a development group into providing unit tests as a part of what is delivered in a sprint?
Unless you are in a position of authority, the best thing you can do is to convince them of the value of the test suite.
It's very difficult to get developers to see the light on this issue if they aren't seeing it done right.
Try to pair with another developer and show them the benefits and the clarity that comes from writing the tests FIRST. If you don't do this, they are likely to write all of their code, get it working, and then write tests. So, from their point of view, it will feel like simply an extra task that doesn't help them get things done.
Also keep in mind that people often do not understand how to write good tests. Even more, some do not know how to make use of tools like jmock, which can lead to them getting stuck and giving up on writing a test.
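For those stuck on the mocking part, here is a small jMock 2 sketch showing the basic pattern of setting expectations and verifying them (the exact calls are from memory, so treat the API details as approximate; Warehouse and OrderProcessor are invented for the example):

```java
import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

public class OrderProcessorTest {

    // Invented collaborator interface -- jMock mocks interfaces out of the box.
    interface Warehouse {
        void reserveStock(String sku, int quantity);
    }

    // Invented class under test: it should tell the warehouse what to reserve.
    static class OrderProcessor {
        private final Warehouse warehouse;

        OrderProcessor(Warehouse warehouse) {
            this.warehouse = warehouse;
        }

        void submit(String sku, int quantity) {
            warehouse.reserveStock(sku, quantity);
        }
    }

    private final Mockery context = new Mockery();

    @Test
    public void submittingAnOrderReservesStock() {
        final Warehouse warehouse = context.mock(Warehouse.class);

        // Expectation: exactly one call to reserveStock with these arguments.
        context.checking(new Expectations() {{
            oneOf(warehouse).reserveStock("widget", 2);
        }});

        new OrderProcessor(warehouse).submit("widget", 2);

        context.assertIsSatisfied();
    }
}
```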
Forcing anything onto anybody is not a good practise in my view. I would show them the benefits of TDD at every opportunity I get. This should automatically get the rest of the team to voluntarily practise TDD.
You don't want lip-service unit testing, you want whole-hearted unit testing. That isn't something that can be forced. What you need to do is influence your teammates over time to see the benefits of unit testing and to develop a unit testing culture.
To start with, you need to understand that different people change for different reasons. In Crossing the Chasm terms, visionaries will adopt new techniques because they are better, but pragmatists adopt new techniques either because they solve a problem/pain they currently have or because everyone else is adopting them.
Your mission then is to show how unit testing can solve a pain your team currently feels. As you win people over one by one, eventually you can reach a tipping point where unit testing is the norm and everyone goes along with it. However, if you can't tie unit testing to a pain your team feels, then your efforts to convince them will likely fail.
Considering that unit testing improves code and software quality in the long term, I would say that yes, it is good practice to have your developers provide unit tests -- be it part of some kind of sprint or not.
The two main barriers to unit tests I've seen are:
the developers don't get the point: "why would we write more code just to test?"
"we don't have time to write unit tests"
To answer the first point, you'll have to provide some sort of demonstration / training, I suppose; if you can get developers to see why unit tests are useful, they will like them and use/develop them; but they need to see why they are useful: unless you are their boss, you cannot force people to develop unit tests.
And even if you are their boss, they will probably not do the best possible job if they are being forced: unit testing is often done better if people understand why and how!
To answer the second point... Well, you obviously need to give your developers some "special" time to develop unit tests; it can mean less time spent on manual testing, by the way.
Another thing is: it is hard to know "what to test" and "how to test": you will need to explain / demonstrate that to your colleagues. Some things cannot be tested, some things don't need to be, and some things are not "unit-testable" -- well, I suppose, unless your software is really well engineered ^^
I've gotten in that position on many jobs and contracts in the past, so I've finally gotten discouraged and embraced the darkness by advocating Development Driven Development.
I've found that when most managers embrace XP, they're embracing throwing out the documentation, not really doing TDD. Programmers on most teams are rewarded for quick hacks that get the defect out of their queue, and it's one manager in about ten who has the guts to stand up to senior management as an advocate of the overall quality of the product as opposed to the bottom-line-for-this-quarter way of doing business. After all, most software jobs that pay anything are corporate sponsored, and Freud proved that corporations are insane.
Or at least, he should have.
There is a great Joel on Software article on this called Getting Things Done When You're Only a Grunt. His strategies applied to unit testing would be the following:
Just Do It
Most important, I think. Regularly write tests, do not make it appear as if you yourself see them as a minor part of development.
Harness the Power of Viral Marketing
If you see an error in someone else's code, write a test that triggers it and present both to him. Maybe he sees your point.
Create a Pocket of Excellence
Identify the team members who are open to the idea but not quite sure how to work with unit tests, and set up a number of test classes until every one of them knows how to do it. Then turn towards the other ones. Things are a lot easier to establish once you are not the only one anymore.
Neutralize The Bozos
There will be team members who are almost impossible to get to write tests. Maybe those should be dealt with by regularly breaking their code with a new commit - and then pointing out that since they don't have tests it was hard for you to notice.
It depends on your version control, but there is version control software that enables you to run scripts before check-in or before a merge to the production branch, and it will not let the developer check in if the unit tests fail.
First, talk to your manager. If he is convinced that testing is a good thing, add a coverage test to your build system. If the coverage of your unit tests falls below a certain level, you can handle it as a failed build. This gives your colleagues a measure, a way to see when they fail to deliver tested code.
In your case, NCover seems to integrate nicely into CC.NET.
Create a metric that you want developers to achieve.
Make sure there is a subtask for Unit test creation for every story.
No. In my experience, TDD isn't all that useful in practice. I sometimes use it for really general fundamental classes (like geometry or generic data structures) that lend themselves naturally to automated tests. But for UI components or business logic, I find it's more trouble than it's worth.

Is Scrum possible without test driven development? [closed]

I have now witnessed two companies move to agile development with Scrum.
In both cases the standard of coding was good enough when each part of the application was only being worked on by one or two developers with the developers spending a reasonable amount of time working on one part of the application before moving to the next task. The defect rates were also reasonable.
However with Scrum the developers are expected:
to all be able to work on all the bits of the application.
to only work on one area of the application for a few days at most before moving to the next area
to mostly work on code they did not write
Code quality became an issue in both of the Scrum projects.
So is there a way to do Scrum that does not lead to these problems, without first getting all the developers to do test driven development?
Have you seen Scrum work well on a large project without test driven development? (If so how?)
I'd like to expand on what Dan said.
It's a very common misconception that Scrum / Agile dictates software engineering principles. This is a fallacy for many reasons. As Dan mentioned, Scrum is a software management process, NOT a software engineering process. That being said, very often you will see many engineering principles associated with Scrum; methodologies such as TDD, XP, etc tend to complement the management methodology that Scrum promotes, but are not required.
The reason that CI, TDD, and other engineering practices are so often found hand-in-hand with Scrum is that in general, many are good practices to follow no matter what management methodology is used.
I'd like to address a couple other fallacies in your OP:
However with Scrum the developers are expected:
* to all be able to work on all the bits of the application.
* to only work on one area of the application for a few days at most before moving to the next area
* to mostly work on code they did not write
As mentioned above, Scrum doesn't dictate what type or kind of work a developer works on. The developers themselves decide on what work to commit themselves to; if a database-heavy dev wants to only work on the DAL and associated stories, there's no reason that they cannot.
Again, Scrum doesn't dictate anything about how to build the application, so your second point is moot (see point 1).
This is a fallacy, since there is nothing that says a developer should only work on code that isn't theirs, or anything about how a developer should develop. If a developer on a Scrum team finds himself or herself only working on others' code, that is coincidental, not because of the Scrum process itself.
See this question/answer for more information on the qualities generally expected in a developer working Scrum.
Yes, Scrum describes the software management approach. The program and project management paradigm should not dictate whether or not you use test driven development.
TDD is a software development practice or technique and although it works well with Scrum I don't think it will make or break your success with the practice.
I have personally seen Scrum work well on medium sized projects without a test driven approach to development. That is not to say we didn't write automated tests, they just were not always written first.
Regardless of the use of Scrum, what you were seeing was a change from a code-ownership approach to a communal-code approach. In order for that to work, there has to be a process change which supports it. One such possibility is TDD. There are others (automated unit testing even if it doesn't drive the design, coupled with code reviews, strong design communication, greater design up front, not developing on code without first pairing with the original author of the code, and more I'm sure you could think of).
Communal approaches work in smaller communities (in large ones it can degenerate into a tragedy of the commons) with a high sense of cohesion between the members.
We do Scrum at work, but we don't practice TDD. Nothing in the Scrum "guidelines" tells you that you have to use TDD. In fact, most agile practices are merely a recommendation insofar that they have proven to work well in agile environments (or even non-agile ones) without them being a must if you want to implement Scrum.
We do write lots and lots of unit and integration tests to avoid extensive manual testing and ensure that later changes in the code don't lead to any unpredicted side effects. But that's not TDD. It's mostly a sensible approach to ensure good code and software quality.
Please note that not implementing TDD was not an "active" decision, it just happened. We are encouraged to "write tests first", e.g. when fixing a bug, so it's kind of a voluntary situation-driven not-obligatory way of getting a feeling for TDD by applying it on a regular basis, but it is not mandatory.
Like others said: Scrum is a framework which can hold whatever practices you want to enforce in your development team. Some agile practices come naturally, because they generally make sense, but which ones you want to use and which ones you don't want is up to you.
Yes, Scrum is entirely possible and in most cases implemented without utilizing a TDD approach.
However, the flexibility that TDD provides is certainly something that a Scrum methodology can benefit from.
The main project I work on uses a Scrum approach but the embedded nature of our project makes test-driven development (the way that most people do it) impractical.
I think the problem you encountered was that the expectations of the programmers changed, not that the management process went to a Scrum-style system. If programmers are constantly being shuffled around to parts of the code they are not familiar with, investigate the process behind how tasks are being delegated relative to the old method. Are tasks being assigned to the developer who knows that area the best or are they going to the developer with the shortest to-do list? Is there a long backlog of to-do items for one part of the code and a scarcity of to-do items for another part? If you want to keep developers focused on the areas they excel in, then project management will want to adjust sprint lengths and task priorities to make sure that the workload can be distributed as desired and still be feasible given the time constraints of the sprint.
The Scrum framework is pretty small, it defines a few meetings, ideal lengths of iteration, responsibilities of product owner, scrum master... and maybe a bit more.
However, once we have started our iteration there is nothing in Scrum that dictates how and when developers should develop something. There is only the commitment that it will be 'done' by the end of the sprint.
Scrum is about the team committing to produce results, and the team being empowered to decide how to do so. If that means 1 dev per user story, great. If that means 3 devs per user story, that's fine also. Whatever way the Scrum team believes to be the best way to do the job is what should happen.
To answer the question, yes Scrum is possible without test driven development.
TDD is a worthwhile/recommended practice but the results will vary depending on team/context. For example was TDD in place at the start of a project or are you trying to inject the methodology in at a later date.
There's some confusion about Scrum here.
Scrum per se doesn't tell you how/when to do technical things like TDD (that's a forever moving target). Scrum tells you how/when to manage the people things that happen on a project. It is an overall project management technique, not a construction management technique.
If your manager wishes to do the three things listed above during your sprints, that's fine, but that isn't part of the framework of Scrum. Those are for construction management, which isn't Scrum. It may be used in your Scrum-framework-bounded project, but it isn't in the official Scrum framework.
I think it's easy to be confused about this, though. Agile techniques like Scrum are usually evangelized by people on the 'net who are all about pushing buzzwords and 'happy shiny' things, and as such don't always understand/communicate well. At least, that's how Agile techniques were introduced to me, by an Agile apologist. It took me a good 6 months before I got past the hype / confusing terminology and figured out what they were talking about.
While there are parts above I agree with, I truly believe that you cannot deliver small increments of code (Scrum for software) without testing, period.
How do you know your sprint didn't break the last four? How can you guarantee deliverables if you don't know that you didn't break anything in the past?
While I do agree that Scrum is a management process and that TDD is a software process, you must have some way to verify that you did not move backwards.
Scrum teaches you daily deliverables and always moving forward, at the expense of speed.
When someone says they want to do CI or Agile or Scrum, to me this automatically means there needs to be unit testing (not integration testing as mentioned above), but rather each individual moving part having its own unit tests.
If you do an integration test, you are not testing each individual part in a self-contained way. Therefore all you prove is that the method called in one flow works, rather than each possible branch you would see in the MSIL.
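A small invented example of the difference: an end-to-end flow through the code below might only ever hit the discount branch, while unit tests can pin down both branches directly.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountPolicyTest {

    // Invented unit under test: one method, two branches.
    static class DiscountPolicy {
        int discountedTotal(int totalCents) {
            if (totalCents >= 10_000) {
                return totalCents * 90 / 100;   // 10% off large orders
            }
            return totalCents;                  // small orders pay full price
        }
    }

    @Test
    public void largeOrdersGetTenPercentOff() {
        assertEquals(9_000, new DiscountPolicy().discountedTotal(10_000));
    }

    @Test
    public void smallOrdersPayFullPrice() {
        assertEquals(5_000, new DiscountPolicy().discountedTotal(5_000));
    }
}
```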

Should developers work in sandboxes?

If developers perform unit testing in their development environment before checking in to source control should that environment (including test failures) be shared?
Should all builds be public?
I think it's impractical to make developer builds public. You do not want to bother your team members with every build failure (unit test failure) you encounter.
You are always in the process of creating a solution for some problem, and chances are you won't get it right the first time, so unit test failures will happen often. Especially if you take a test-driven approach to developing your code: writing your unit test first and then implementing the functionality so that it no longer fails.
I think working in a sandbox is a good idea. It has saved me a few times. I usually have a few different virtual machines floating around that I use for development and if I mess it up real bad I don't have to wait for my machine to be rebuilt.
I don't think all test results from simple developer builds should be made public. I'm not really worried about hurting someone's feelings by having all their failures public necessarily but I worry that the information they provide isn't useful.
It would be interesting to investigate some type of system whereby the developer is required to submit passing test results when they check in, but I think even that would be pushing things. It may have the detrimental effect of hurting productivity. Developers have enough non-coding stuff to do already.
Children should play in sandboxes ;), software developers should play on their own PC and commit their code whenever they feel it meets a certain quality level. When everybody commits and updates small and tested pieces of code regularly then my experience is that no serious problems occur, only constructive feedback and sometimes somebody shouts something. Finally releasing the software to the public/customers is a different story. That takes extensive testing, writing release notes, updating manuals, marketing, etc.
Yes, developers should work in sandboxes if possible. No, builds should not all be public by default. TDD will lead to multiple failures and refinements to both tests and code. Sharing builds publicly may be bothersome, but certainly other developers should be able to see what someone is up to if they care enough to go and look. Builds should be made public when asked for. If you are asking for proof that developers tested something, then running their unit tests after they check the code in should be proof enough.
Giving developers the environment, tools, and freedom to test changes liberally will improve the stability and quality of your software. Testing theories and troubleshooting often require small incremental builds. If the sandbox is expensive, they should be required to reserve time for using it. Giving each developer a private sandbox could result in their code branching for long periods of time. What is your motivation in asking this? If the developer is trying to hide something, then get to the root cause of that issue. If you are trying to control costs, then consider the reservation model.
What benefit do you see in this? In particular, it means everyone gets email about every picayune test failure by every developer.
This merely serves to distract everyone.
Avoid the temptation to do something just because it's possible to do; let your requirements drive your process. Don't create new process just because you can.