Context
I am developing a web application that relies on a suite of REST tests built for Postman.
The idea is that you can run the tests manually with Postman as a REST client against the application runtime, while the Maven POM configures Newman to run them automatically as integration tests in the CI pipeline whenever a build is triggered.
This has worked fairly well in the past.
Requirement
However, due to an overhaul of the business logic, many of those tests now require a binary body, provided as a file resource, in their POST requests (mostly zip archives).
I need those tests to work in 3 scenarios:
Manually, when running individual tests with Postman locally against a runtime of the web application
Semi-manually, as above, but by triggering a collection runner in Postman
Automatically, when Newman is started by Maven during the integration-test phase of our pipeline
In order to make sure the path to the file in each request would work regardless of the way the tests are run, I have added a Postman environment variable in each profile. The variable would then be used by the collection in the relevant requests, e.g.:
"body": {
    "mode": "file",
    "file": {
        "src": "{{postman_resources_path}}/empty.zip"
    }
},
The idea would be that:
locally, you manually override the value of postman_resources_path in the profile so that it points to an absolute path on your machine (e.g. simply where you keep the resources in source control) - this would then resolve it both for manual tests and for a local runner
for the CI pipelines, the same would apply, with a default value pointing to a path relative to the --working-dir value, which would be set in the Newman command-line parametrization of the exec-maven-plugin already in use to run Newman
Problem
While I haven't had a chance to test the pipeline with those assumptions yet, I can already see that this isn't working locally.
Looking at a request, the environment variable is not being resolved, even though I have manually set its value in the profile I'm running the request against.
TL;DR The request fails, since the resource is not found.
The most relevant literature I've found does not address my use case entirely, but the solution given goes in a similar direction: "variabilize" the path - see here.
I could not find anything specific enough in the Postman reference.
I think I'm onto something here, but I won't accept my own answer yet.
TL;DR it may be simpler than it seemed initially.
This Postman doc page states:
When you send a form-data or binary file with a request body, Postman saves a path to the file as part of the collection. The file path is relative to your working directory.
If I modify the raw collection JSON so that only the file name (or any relative path) is the value of the "src" key in the file definition, and set up the working directory manually in my Postman client, the file is resolved correctly - so there is no need for (non-working) variables in the file path.
The working directory setting does not seem to be saved in the collection, meaning a one-time manual setup for local clients plus the use of --working-dir with Newman should do the trick altogether.
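With this working-directory approach, the Newman invocation in CI could look something like the sketch below. All file names and paths are hypothetical; substitute your actual repository layout.

```shell
# Hypothetical file names; adjust to your repository layout.
COLLECTION=src/test/postman/collection.json
ENVIRONMENT=src/test/postman/ci-environment.json
RESOURCES=src/test/postman/resources   # contains empty.zip etc.

# --working-dir is the directory against which relative "src" paths
# in file-mode request bodies are resolved.
CMD="newman run $COLLECTION -e $ENVIRONMENT --working-dir $RESOURCES"
echo "$CMD"

# Execute only if newman is actually installed.
command -v newman >/dev/null && $CMD || true
```

The same command line would go into the exec-maven-plugin arguments, so the CI run and a local run resolve file resources identically.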
Will self-accept once I've successfully tested with newman.
Related
I have different Postman collections per project repository, and every time I switch collections to run some tests, they break because they depend on resources (files and images sent as part of request bodies) that are versioned separately in different repositories (you can see each repository as a different working directory).
In the Postman General settings I can change the working directory location by specifying a path; however, I have to update this path to the specific collection's repository directory every time I want to run a collection.
Is there a way to set a specific working directory on a per-collection (or per-workspace) basis?
Another option could be to specify resources relative to the collection file (but not by setting an absolute path, as each developer might check out the code in different places). The idea would be that I don't need to change the location every time for each request. Is there a way to do that?
I have an ever-growing collection of Postman tests for my API that I regularly export and check in to source control, so I can run them as part of CI via Newman.
I'd like to automate the process of exporting the collection when I've added some new tests - perhaps even pull it regularly from Postman's servers and check the updated version in to git.
Is there an API I can use to do this?
I would settle happily for a script I could run to export my collections and environments to named json files.
Such a feature should be available in Postman Pro when you use the cloud instance feature (I haven't used it yet, but I probably will for continuous integration). I'm also interested, and I came across this information:
FYI, you don't even need to export the collection. You can use Newman to talk to the Postman cloud instance and call collections directly from there. Therefore, when a collection is updated in Postman, Newman will automatically run the updated collection on its next run.
You can also add in the environment URLs to have Newman swap environments automatically (we use this to run a healthcheck collection across all our environments [Dev, Test, Stage & Prod]).
You should check out this feature; Postman licences are not particularly expensive, so it can be worth it.
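As a sketch, running a cloud-hosted collection with Newman could look like this; the collection UID and API key below are placeholders, not real values.

```shell
# Placeholder identifiers; substitute your real collection UID and API key.
API_KEY="PMAK-your-api-key"
COLLECTION_UID="1234567-abcd-ef01"

# Newman fetches the collection straight from the Postman API, so the
# latest saved version runs on every invocation - no export step needed.
CMD="newman run https://api.getpostman.com/collections/$COLLECTION_UID?apikey=$API_KEY"
echo "$CMD"

# Execute only if newman is actually installed.
command -v newman >/dev/null && $CMD || true
```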
Hope this helps,
Alexandre
You have probably solved your problem by now, but for anyone else coming across this, Postman has an API that lets you access information about your collections. You could call /collections to get a list of all your collections, then query for the one(s) you want by their id. (If the link doesn't work, google "Postman API".)
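As a sketch, exporting collections with the Postman API and curl might look like this. The API key and collection UID are placeholders; X-Api-Key is the header the Postman API expects.

```shell
# Placeholders; generate an API key in your Postman account settings.
API_KEY="PMAK-your-api-key"
BASE=https://api.getpostman.com

# List all collections, then download one by its UID and check the
# resulting JSON into git.
LIST_CMD="curl -s -H X-Api-Key:$API_KEY $BASE/collections"
EXPORT_CMD="curl -s -H X-Api-Key:$API_KEY $BASE/collections/1234567-abcd -o my-collection.json"
echo "$LIST_CMD"
echo "$EXPORT_CMD"
```

A cron job or CI step running the export command gives you the "pull it regularly and check it in" workflow asked about above.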
I'm using ember-cli 0.0.28, which depends on Broccoli to build the distributable source of my front-end application. The problem I'm having is that whenever I (re)build, I need the index.html file to be copied (or rather moved) to my back-end's template directory, from which I serve the application.
I can't figure out how to configure the Brocfile.js in the ember-cli project directory to do this after the build is complete.
I've used a symlink for the time being, which works but would be a dead link until the front-end application is built with ember build. I think it's possible to use grunt-broccoli to run the build as a Grunt task? I don't know if this is the way forward though.
Using broccoli-file-mover is easy enough, but it works with current trees, not future trees!
All help is appreciated.
ember-cli has progressed quite a bit, but this question is still fundamentally valid, and there are myriad ways to address it.
If the front-end build is to be bundled with the back-end assets, a symlink from the build/dist directory to the back-end's assets directory is adequate for most development phases.
Now, ember-cli also allows proxying to the back-end via the ember server command, which is useful if you are building an API-backed app.
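For instance, during development the proxy flag lets the ember dev server forward API requests to a locally running back-end; the port here is just an example.

```shell
# Forward API requests from the ember dev server to the local back-end;
# the back-end port (4000) is an assumption, use whatever yours listens on.
CMD="ember server --proxy http://localhost:4000"
echo "$CMD"
```

Run this from the project root; the front end is rebuilt on change while API calls hit the real back-end.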
ember-cli-deploy also appears to be an excellent way to deploy front-end apps, and it can help with deploying to a development or production environment. It has many packs, but I've settled on the Redis pack, as it provides an easy way to check out a front-end revision with a small back-end tweak, like this:
defmodule PageController do
  def index(conn, %{"index_id" => sha}) do
    case _fetch_page_string(sha) do
      {:ok, output_string} -> html(conn, output_string)
      {:error, reason} -> conn |> send_resp(404, reason)
    end
  end

  defp _fetch_page_string(sha) do
    # some code to fetch the page string (content)
    ...
  end
end
In the above index-page handler, we attempt to match an index_id query param; if it is present, we look up the corresponding page string, which can be checked into, e.g., a key/value store.
We are using CodeIgniter and have two options for calling our API controllers:
we can use a client that calls the controller's URL through cURL,
we can use a client that calls the controller from the command line.
This is perfectly fine for the functionality of our site. However, when I run PHPUnit, the coverage reports for the Controllers are blank while the coverage reports for all Models are correct.
In tracing how Xdebug creates the reports, it appears that both the cURL-based client and the CLI client are invoked outside the scope of the test function, so xdebug_get_code_coverage() does not track the controller code that is executed.
Is it possible to configure Xdebug to recognize code coverage in this scenario? Is it possible to call CodeIgniter controllers within the scope of the PHPUnit test function? Are there any other possible solutions?
Yes, that's easily possible. See http://www.phpunit.de/manual/current/en/selenium.html for more information.
Basically you put some special files in your web root:
PHPUnit_Extensions_SeleniumTestCase can collect code coverage information for tests run through Selenium:
Copy PHPUnit/Extensions/SeleniumTestCase/phpunit_coverage.php into your webserver's document root directory.
In your webserver's php.ini configuration file, configure PHPUnit/Extensions/SeleniumTestCase/prepend.php and PHPUnit/Extensions/SeleniumTestCase/append.php as the auto_prepend_file and auto_append_file, respectively.
In your test case class that extends PHPUnit_Extensions_SeleniumTestCase, use protected $coverageScriptUrl = 'http://host/phpunit_coverage.php'; to configure the URL for the phpunit_coverage.php script.
When a URL is requested with the GET parameter PHPUNIT_SELENIUM_TEST_ID, the coverage information gets tracked, and PHPUnit can collect it by requesting the coverageScriptUrl.
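Concretely, the php.ini entries described above might look like the fragment below; the PHPUnit install path is an assumption, so adjust it to wherever PEAR put the package on your system.

```ini
; Auto-prepend/append the PHPUnit Selenium coverage hooks on every request.
; The install path is hypothetical - adjust to your PEAR/include path.
auto_prepend_file = /usr/share/php/PHPUnit/Extensions/SeleniumTestCase/prepend.php
auto_append_file  = /usr/share/php/PHPUnit/Extensions/SeleniumTestCase/append.php
```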
An alternative: see our SD PHP Test Coverage tool.
It doesn't use xdebug to collect coverage data, so it won't have xdebug's specific problems. It instruments a script to collect test coverage data; once instrumented, no matter how the script is executed, you will get test coverage data.
(The instrumentation is temporary; you throw the instrumented code away once you have collected the test coverage data, so it doesn't affect your production code base.)
This approach does require you to explicitly list all the PHP scripts for which you want coverage data, though you can ignore some if you want. Usually it isn't worth the bother; most users simply list all their PHP scripts.
I am trying to work out a good way to run a staging server and a production server for hosting multiple Coldfusion sites. Each site is essentially a fork of a repo, with site specific changes made to each. I am looking for a good way to have this staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling the sites each into EAR files to be run on the production server, but I cannot seem to wrap my head around Coldfusion archives, plus I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use Subversion as a go-between for a site: once a site passed QA, the code was committed, and then an SVN update was run in the production server's working directory, which triggered a code copy from the working directory to the actual live code. This worked fine but has many moving parts, and it still required some form of server access on each server to run the commits and updates. It also only worked for an individual site; I think it would be a nightmare to set up and maintain this architecture for multiple sites.
Ideally I would want a group of developers to have FTP access and the ability to log into some control panel to mark a site for QA, then have a QA person check the site and mark it as stable/production-worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person, mind you.)
Sorry if that last part wasn't so much the question, just a framework to understand my current thought process.
Agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts.
You want a repeatable build process that minimizes opportunities for deltas. With that in mind:
SVN export a build.
Tag the build in SVN.
Turn that export into a .zip, something with an installer, etc. The idea being one unit to validate, with a set of repeatable deployment steps.
Send the build to QA.
If QA approves, deploy that build into production.
Move whole code bases over as a build, rather than just changed files. This way you know that what's put into production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
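The build steps above can be sketched as shell commands; the repository URL, tag path, and version number are all placeholders for your own setup.

```shell
# Placeholders throughout; adapt URL, tag path, and version to your setup.
REPO=https://svn.example.com/mysite/trunk
TAG_BASE=https://svn.example.com/mysite/tags
VERSION=1.4.2

# 1. Export a clean copy (no .svn metadata) of the code base.
EXPORT_CMD="svn export $REPO build-$VERSION"
# 2. Tag the exact revision that went into the build.
TAG_CMD="svn copy $REPO $TAG_BASE/build-$VERSION -m \"Tag build $VERSION\""
# 3. Zip the export into a single unit to validate and deploy.
ZIP_CMD="zip -r build-$VERSION.zip build-$VERSION"

printf '%s\n' "$EXPORT_CMD" "$TAG_CMD" "$ZIP_CMD"
```

The resulting build-1.4.2.zip is the one artifact that travels from QA to production, so what was validated is what gets deployed.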
As for actual production deployment, I have not come across a tool to solve the multiple servers, different code bases challenge. So I think you're best served rolling your own.
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
Learning Ant: Ant in Action [book].
On source control: for the situation you describe, I would maintain a core code base plus overlays per site. Export the core, then export the site-specific files over it. This ensures that any core updates not overridden by site-specific changes make it in.
Call this combination a "build". Do builds with Ant. Maintain an Ant script - or, perhaps more flexibly, an Ant configuration file - per core & site combination. Track the version numbers of core and site as part of a given build.
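A parametrized invocation of such a per-site build might look like this; the target name and property names are hypothetical, your build.xml defines the real ones.

```shell
# Hypothetical site name, version, and Ant property/target names;
# your build.xml defines the real ones.
SITE=acme
CORE_VERSION=2.1
CMD="ant -f build.xml -Dsite=$SITE -Dcore.version=$CORE_VERSION package"
echo "$CMD"

# Execute only if ant is actually installed.
command -v ant >/dev/null && $CMD || true
```

One script, driven by properties, keeps the core-plus-overlay build repeatable across all the sites.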
If your software is stuffed inside an installer (Nullsoft Install Shield, for instance), that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but I haven't seen anyone actually do this with CF). The point being: one file that encompasses the whole build.
This build file is what QA should validate, so validation includes deployment, configuration, and functionality testing. See my deployment notes below for how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it, meaning QA would deploy/install builds using the same process on their servers before doing a staging-to-production deployment.
To do this, I would create something that tracks which server receives which build file, along with whatever credentials and connection information are necessary to make that happen - most likely via FTP. Once the file is transferred, the tool would then extract the build file or run the installer. This last piece is an area I would have to research: how to let one server run commands such as extraction or installation remotely.
You should look into Ant as a migration tool. It allows you to package your build process in a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executing it the same way, every time.
Ant can handle zipping and unzipping, copying around, making backups if needed, working with your subversion repository, transferring via FTP, compressing javascript and even calling a web address if you need to do something like flush the application memory or server cache once it's installed. You may be surprised with the things you can do with Ant.
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point. I have one on RIAForge, for example, that does some interesting stuff and calls a Groovy script to do some more processing on my files during the build. If you search RIAForge for build.xml files, you will find a great variety of them, many of which are directly for ColdFusion projects.