I have developed a module with an SQL upgrade script, and since then my PHPUnit tests fail: they break at the point where a new customer attribute is accessed.
I have investigated and concluded that the new updates are applied after a manual cache flush in the admin panel, but not after a console command like zf clear mage-core-cache. I suspect the admin flush does something more than just clearing the cache...
Does anybody know how to run upgrade scripts like mysql4-upgrade-1.0.0-1.0.1.php and apply them programmatically? I need that for my PHPUnit tests. Thanks.
From memory it's Mage_Core_Model_Resource_Setup::applyAllUpdates(), but that relies on Mage_Core_Model_Config being initialized, at least. It works through the Mage::run() workflow.
I am writing some integration tests for an app, testing routes that modify the database. So far I have added code to my tests to delete all the changes they make to the DB, because I don't want to alter it, but that adds a lot of work and doesn't sound right. I then thought about copying the database, testing against the copy, and deleting it in my testing script; the problem is that this takes too long. Is there a better method for doing this?
I see two possible ways to solve your problem:
An in-memory database (e.g. H2).
A database in a Docker container.
Both approaches solve your problem: you can simply shut down the database/container and start it again, the database will be clean, and you don't have to care about its previous state. However, there are some trade-offs:
An in-memory database is easier to set up and use, but it may have dialect problems: e.g. some Oracle SQL commands are not available in H2. And ultimately you are running your tests against a different DB than production.
A Docker container with a database is harder to plug into your build and tests, but it avoids the embedded-DB dialect problems, and the DB in Docker is the same engine as your real one.
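As a minimal illustration of the in-memory idea (using Python's built-in sqlite3 here as a stand-in for H2, purely to show the pattern): each test opens a fresh database, and everything vanishes when the connection closes, so there is nothing to clean up.

```python
import sqlite3

def fresh_db():
    """Open a brand-new in-memory database and create the schema."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    return conn

# First "test" gets its own clean database and mutates it freely:
conn = fresh_db()
conn.execute("INSERT INTO users (name) VALUES ('alice')")
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1
conn.close()  # everything is discarded here

# A second "test" starts from scratch, unaffected by the first:
conn2 = fresh_db()
assert conn2.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0
conn2.close()
```

The same shape applies to the Docker variant, except `fresh_db` would start (or connect to) a throwaway container instead of opening an in-memory connection.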
You can start a database transaction at the beginning of a test and then roll it back. See the following post for details:
https://lostechies.com/jimmybogard/2012/10/18/isolating-database-data-in-integration-tests/
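The idea in a nutshell, sketched with Python's sqlite3 (the same pattern works with any database driver that exposes transactions): open a transaction in the test's setup, let the code under test mutate data, and roll back in tear-down, leaving the database untouched.

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode,
# so we control transactions explicitly with BEGIN/ROLLBACK.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts (balance) VALUES (100)")

# --- test setup: open the wrapping transaction ---
conn.execute("BEGIN")
# the code under test mutates the database
conn.execute("UPDATE accounts SET balance = 0")
assert conn.execute("SELECT balance FROM accounts").fetchone()[0] == 0
# --- tear-down: roll everything back ---
conn.execute("ROLLBACK")

# the database is back in its pre-test state for the next test
assert conn.execute("SELECT balance FROM accounts").fetchone()[0] == 100
```

This is usually much faster than restoring a copy, since nothing is ever written permanently.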
Let me begin by saying I'm a ColdFusion newbie.
I'm trying to research whether it's possible to do the following, and what the best approach would be.
Whenever a developer checks code into SVN, I would like to fetch all the new changes/files and run an automated build to check whether the code can be deployed successfully to the production server. I guess there are two parts to it: first, syntax checking, and second, integration testing (whether functionality works as expected). For the latter part, some unit-testing tools would have to be used.
Can someone comment on their experience doing something similar for ColdFusion?
Sorry for being a bit vague... I know it's a very open-ended question, but any feedback would be appreciated.
Thanks
There's a project called "Cloudy With A Chance of Tests" that purports to do what you require. In particular it brings together a number of other CFML code analysis projects (VarScope & QueryParam) to check code, as well as unit testing. I am not currently using it myself but did have a look at it some time ago (more than 12 months) and it appeared to be quite good.
https://github.com/mhenke/Cloudy-With-A-Chance-Of-Tests
Personally I run MXUnit tests in Jenkins using the instructions from the MXUnit site - available here:
http://wiki.mxunit.org/display/default/Continuous+Integration+--+Running+tests+with+Jenkins
Essentially this is set up as an ant task in Jenkins, which executes the MXUnit tests and reports back the results.
We're not doing fully continuous integration, but we have a process which automates some of the drudgery of our builds:
replace the site's application.cf(m|c) with one that tells users that the app is being deployed (we had QA staff raising defects that were due to re-deployments)
read a database manifest XML which lists all SQL scripts which make up the current release. We concatenate the scripts into a single upgrade script, suitable for shipping
execute the SQL script against the server's DB, noting any errors. The concatenation process also adds a line of SQL after each imported script that writes to a runlog table, so we can see what ran, how long it took and which build it was associated with. If you're looking to replicate this step, take a look at Liquibase
deploy the latest code
make an http call to a ?reset=true type URL to tell the app to re-initialize
execute any tests
The build is requested manually through the build servers we have, but you click a button, make tea and it's done.
We've just extended the above to cope with multiple servers in a cluster and it ticks along nicely. I think the above suggestion of using the Jenkins SVN plugin to automate the process sounds like the way to go.
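The concatenation step described above could be sketched like this in Python; the manifest handling, file names and runlog schema here are made up for illustration, not taken from the actual build.

```python
from pathlib import Path

def build_upgrade_script(script_paths, build_id):
    """Concatenate the release's SQL scripts into one upgrade script,
    appending a runlog INSERT after each imported script so we can
    later see what ran and which build it belonged to."""
    parts = []
    for path in script_paths:
        parts.append(f"-- begin {path.name}")
        parts.append(path.read_text())
        parts.append(
            "INSERT INTO runlog (script, build, ran_at) "
            f"VALUES ('{path.name}', '{build_id}', CURRENT_TIMESTAMP);"
        )
    return "\n".join(parts)
```

The resulting single file is suitable for shipping, and the runlog rows double as an audit trail of which scripts ran in which build.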
Is there a way to tell Fabric to just print the commands it will execute, instead of actually executing them?
I'm preparing an installation script, and if it fails I'll have to undo the steps performed before the error occurred.
I've checked the "fab" command parameters but found nothing about this.
Thank you for your help.
There are tickets (including issue 26) open on GitHub that request such a feature. The challenge described in that thread is that you can't always be certain what the script would do, i.e. some behaviour may change depending on the state of the remote server.
As an alternative, you could look at reproducing your environment in a virtual machine (vagrant makes doing this really easy), and testing to see whether your scripts run as expected there.
If you're really concerned about this, a configuration management system (particularly one that can reverse changes) like puppet or chef may make more sense.
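Since Fabric doesn't ship a dry-run flag, one workaround is to route all commands through your own wrapper that, in dry-run mode, records and prints each command instead of running it. This is a generic sketch (the `run` wrapper and `DRY_RUN` flag are my own names, not Fabric API); in a fabfile you'd delegate to Fabric's real `run`/`sudo` in the non-dry branch.

```python
import subprocess

DRY_RUN = True   # in practice, flip this via an env var or a fab argument
executed = []    # record of what would have run, useful for review

def run(cmd):
    """Execute a shell command, or just record/print it in dry-run mode."""
    if DRY_RUN:
        print(f"[dry-run] {cmd}")
        executed.append(cmd)
        return None
    return subprocess.run(cmd, shell=True, check=True)

# installation steps call run() as usual:
run("apt-get install -y nginx")
run("service nginx start")
```

As the ticket thread notes, this only shows the commands your script would issue given its current inputs; anything that branches on remote state can still differ in a real run.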
I have a team build (upgrade template, TFS 2010, MSBuild) compiling and testing a WCF service. We use psexec and the Exec task to remotely install the service (WiX installer) on the web server, prior to running an integration test suite against it. However, sometimes our nightly build fails with a compilation error; we can only see the first 1024 bytes of the response, and most of it is CSS styles. I've tried delaying the tests with sleeps, thinking it might be due to a long JIT, but all 600+ integration tests fail. In the build log it seems that the Exec task with psexec executes synchronously, as expected, and returns exit code 0. Could anyone come up with a reason why this occurs now and then?
This doesn't sound like it's anything specific to TFS, msbuild or psexec -- it sounds like there may be an interim installation, configuration, or coding problem with the service. The point of CI and integration tests is to get early feedback on your process, and apparently something is wrong. The trick is drilling down into the problem and ruling out where the issue(s) reside.
Psexec claims the WiX deployment went fine, but did it? Are all files present? Were previous versions of the installation properly removed, or did they get upgraded correctly?
All 600 tests fail, but the tests don't contain the proper stack trace. Can you reproduce the problem outside of the tests? E.g., when the tests fail, can you exercise the service manually, or run one of the existing tests with a debugger attached to see the same stack trace? One strategy might be to identify one or two specific tests that accurately validate the deployment: run just these tests after the deployment, and if they fail, abort the build and leave the server in its failed state for deeper analysis. This may require customizing the build template, but it may be worth the effort.
Can you add logging to the WCF service? Better logging to the tests?
Lastly, CI, as mentioned previously, is about early feedback. The general rule is, "If something is painful, you should do it more often." Focus on pain points, isolate them and iteratively improve them. When the pain diminishes, move on to other pain points. In your case, consider running your "nightly" build in a rolling fashion - you'll find your intermittent problem after a few runs rather than after weeks.
I want to test my database as part of a set of integration tests. All my code is unit tested against mocks etc. for speed, but I need to make sure the stored procedures and persistence code work as they should. I did some Googling yesterday and found a nice article here: http://msdn.microsoft.com/en-us/magazine/cc163772.aspx, but it seemed a little old. I wondered whether there is a current, 'better' way of clearing out the database, restoring it to an expected state, or rolling back, ready for each test? I'm coding in C# 4 and MVC 3, using SQL Server 2008.
We are using DbUnit to set up and/or tear down the database between tests as well as to assert database state during test.
It's stupid-simple, so it may not be exactly what you need, but what I've done is keep a backup of the database in a given sane state - usually whatever the current production database is. Then, for each build, we restore that database (using Jenkins, NAnt and SQLCMD), apply the current build's update scripts, and run our test suite. This has the advantage of both giving you a database that is a 'known quantity' and verifying that your upgrade scripting works.
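The restore-then-upgrade shape is roughly this; the answer's real pipeline uses SQLCMD against SQL Server, so this sketch substitutes Python's sqlite3 and made-up paths purely to show the flow.

```python
import pathlib
import shutil
import sqlite3

def restore_and_upgrade(backup_path, update_scripts, work_dir):
    """Copy the known-good backup to a working file, then apply the
    current build's update scripts in order. This both gives the test
    suite a known starting state and exercises the upgrade scripting."""
    db_path = pathlib.Path(work_dir) / "test.db"
    shutil.copy(backup_path, db_path)           # restore the backup
    conn = sqlite3.connect(str(db_path))
    for script in update_scripts:               # apply upgrades in order
        conn.executescript(pathlib.Path(script).read_text())
    conn.commit()
    return conn
```

If an update script fails here, the build fails too, which is exactly the early warning about broken upgrade scripting that this approach buys you.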