I created a unit test in which I dynamically create and then parse an XML file. When I'm finished with the file I delete it. I'm storing the file momentarily in a resource folder created within my project, but I want to know whether this will still pass if I deploy to a Tomcat server. I'm using getRealPath() right now and it works. I in no way need these files later on, which is why I'm deleting them.
I've read that getRealPath() isn't portable and shouldn't really be used, but that's why I'm asking: for my purposes, would it be OK?
I can't post code because I'm at work, but I'll try to explain:
I use ServletContextHolder.servletContext.getRealPath() and append resources/testfiles to the result. This takes me to my project path (project/out/test/resources/testfiles).
I create an XML file using StringWriter, FileWriter, and MarkupBuilder.
I save this file, read it, and delete it after the test. It works on Windows, but I need to know whether it will still work when the app is deployed on Tomcat and the unit tests run automatically. Will it be able to do all this?
Apologies for the poor formatting; my phone isn't the best way to write this.
The ideal solution would be to not use files at all. But it really depends on what class your XML parser uses for its input. For example, if the parser accepts an InputStream, you can wrap your XML content in a ByteArrayInputStream (the old StringBufferInputStream does the same job, but it's deprecated because it doesn't convert characters to bytes correctly). The parser can then consume that stream as if it were a file. OOP interfaces are awesome like that :)
Is it possible to convert a Sketch file to an SVG without actually having Sketch or DrawIt? I know it's theoretically possible since they're both vector, but I use Windows and Linux, so I don't have a Mac to open the files with.
Figma is currently free for anyone to sign up and supports both Sketch import and SVG export. It's a fully-featured design tool that's browser-based so it should work on all three major platforms (OS X, Windows, and Linux). File import from Sketch isn't a perfect 1-to-1 mapping with Figma because both apps have slightly different feature sets, but it should get you 99% of the way there.
There's no "magic" about "Sketch using a lot of technology that is exclusive to OS X" or not. That statement on Sketch makers' website is in the context that they do not intend to make Sketch app available on linux and on windows.
The real problem you have is as follows:
A Sketch file is an sqlite3 database, and in it there are (at the time of writing) two tables, meta and payload. The payload table is a single key-value store whose main key maps to a BLOB value. So that's where you get stuck: you will have to figure out how to reverse engineer that BLOB if you do not have a Sketch program.
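If you want to verify that structure yourself, the sqlite3 C API is enough to peek inside. A minimal sketch (error handling mostly omitted, and assuming you have the sqlite3 development library) that just lists the tables in a .sketch file, where you should see meta and payload:

    #include <cstdio>
    #include <sqlite3.h>

    int main(int argc, char** argv) {
        if (argc < 2) {
            std::fprintf(stderr, "usage: %s file.sketch\n", argv[0]);
            return 1;
        }
        sqlite3* db = nullptr;
        if (sqlite3_open(argv[1], &db) != SQLITE_OK) {
            std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }
        // A .sketch file is an ordinary sqlite3 database, so the
        // standard catalog query works on it.
        sqlite3_stmt* stmt = nullptr;
        sqlite3_prepare_v2(db, "SELECT name FROM sqlite_master WHERE type='table'",
                           -1, &stmt, nullptr);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            std::printf("table: %s\n", (const char*)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }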
On the other hand, if you do have a Sketch program, you don't need to reverse engineer anything: you can query information from your Sketch file using the Sketch Plugin APIs, which are well documented at http://bohemiancoding.com/sketch/support/developer/ and will allow you to automate a lot of tasks if you have a complicated design workflow. http://zeplin.io is an example of a simple Sketch plugin that pulls the relevant "spec" information out of a designer-created Sketch artboard for a developer.
But back to your original question: Sketch itself allows you to export SVG files, but that assumes you have the Sketch app. Long story short, unless you reverse engineer the BLOB in the Sketch file (or use a tool someone else created that can), you can't programmatically translate Sketch files into SVG files without the Sketch app.
I have seen some posts that mention the XmlSerializer being invoked at runtime in .NET.
I have a SharePoint web part that calls a web service to retrieve data and is then supposed to display that data on the web part. But I get this error:
System.Runtime.InteropServices.ExternalException: Cannot execute a program. The command being executed was "C:\Windows\Microsoft.NET\Framework64\v2.0.50727\csc.exe" /noconfig /fullpaths @"C:\Users\my_deploy_spFarm_user\AppData\Local\Temp\OICE_356C17F3-2ED2-423C-8BBE-CA5C05740FD7.0\eelwfhnn.cmdline"
Now the posts I have read here state that the problem is that the compiler is trying to create an XML serialization assembly on the fly, but does not have the privilege to do so.
I have seen some suggestions to use post-build events to create this XML serialization assembly at compile time. However, I am not sure how to do that, and I am also not sure whether this assembly would get included in the .wsp package.
I'd take a good look at whether you really want the full, automatically generated serializer, or whether you just want to emit/parse some relatively straightforward XML. If it's the latter, you'll solve this problem by not using anything that needs generated code, i.e. by using XmlReader/XmlWriter directly.
This link has the basic sgen.exe command for creating the pre-compiled serializers.
I have a schema (XSD), and I want to create XML files that conform to it.
I've found code generators that generate classes which can be loaded from an XML file (CodeSynthesis). But I'm looking to go in the other direction.
I want to generate code that will let me build an object which can easily be written out as an XML file. In C++. I might be able to use Java for this, but C++ would be preferable. I'm on Solaris, so a Visual Studio plugin (such as xsd2code) won't help me.
Is there a code generator that lets me do this?
To close this out: I did wind up using CodeSynthesis. It worked very well, as long as I used a single XSD as its source. Since I actually had two XSDs (one imported the other), I had to merge them manually (they used some odd inheritance that needed manual massaging).
But yes, Code Synthesis was the way to go.
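For anyone who finds this later, the rough shape of the CodeSynthesis workflow looks like the sketch below. The catalog schema and its fields are invented for illustration; the generated type and function names follow XSD's C++/Tree conventions but depend entirely on your own schema:

    // First compile the schema with serialization support enabled:
    //   xsd cxx-tree --generate-serialization catalog.xsd
    // This produces catalog.hxx / catalog.cxx with a C++ class per schema type.

    #include <iostream>
    #include "catalog.hxx"  // generated header

    int main() {
        // Build the object tree in memory using the generated types
        // (constructor arguments depend on which elements are required).
        catalog_t c("My catalog");

        // Serialize it as XML to any std::ostream via the generated
        // serialization function named after the root element.
        xml_schema::namespace_infomap map;  // namespace prefixes; empty is fine here
        catalog(std::cout, c, map);
        return 0;
    }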
I'm working on a C++ library that (among other stuff) has functions to read config files, and I want to add tests for this. So far, this has led me to create lots of valid and invalid config files, each with only a few lines that test one specific piece of functionality. But it has now become very unwieldy, as there are so many files, and also lots of small C++ test apps. Somehow this seems wrong to me :-) so do you have hints on how to organise all these tests, the test apps, and the test data?
Note: the library's public API itself is not easily testable (it requires a config file as a parameter). The juicy, bug-prone methods for actually reading and interpreting config values are private, so I don't see a way to test them directly?
So: would you stick with testing against real files; and if so, how would you organise all these files and apps so that they are still maintainable?
Perhaps the library could accept some kind of stream input, so you could pass in a string-like object and avoid all the input files? Or, depending on the type of configuration, you could provide get/setAttribute() functions to directly, publicly, fiddle with the parameters. If that is not really a design goal, then never mind. Data-driven unit tests are frowned upon in some places, but they are definitely better than nothing! I would probably lay out the code like this:
project/
    src/
    tests/
        test1/
            input/
        test2/
            input/
In each testN directory you would have a .cpp file associated with the config files in that test's input directory.
Then, assuming you are using an xUnit-style test library (CppUnit, googletest, UnitTest++, or whatever), you can add various testXXX() functions to a single class to test associated groups of functionality. That way you cut out part of the lots-of-little-programs problem by grouping at least some tests together.
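For example, with googletest it might look like the following (ConfigFile and its isValid() method are invented here; substitute whatever your library actually exposes):

    #include <gtest/gtest.h>
    #include "ConfigFile.h"  // hypothetical class under test

    // All parser checks grouped in one suite instead of one
    // tiny program per config file.
    TEST(ConfigParser, AcceptsMinimalValidFile) {
        ConfigFile cfg("tests/test1/input/minimal.cfg");
        EXPECT_TRUE(cfg.isValid());
    }

    TEST(ConfigParser, RejectsFileWithUnknownKey) {
        ConfigFile cfg("tests/test1/input/unknown_key.cfg");
        EXPECT_FALSE(cfg.isValid());
    }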
The only problem with this layout is if the library expects the config file to be called something specific, or to be in a specific place. That shouldn't be the case, but if it is, it would have to be worked around by copying your test file to the expected location.
And don't worry about lots of tests cluttering your project up; if they are tucked away in a tests directory, they won't bother anyone.
Part 1.
As Richard suggested, I'd take a look at the CppUnit test framework. That will drive the layout of your test framework to a certain extent.
Your tests could be in a parallel directory located at a high level, as per Richard's example, or in test subdirectories or test directories parallel with the area you want to test.
Either way, please be consistent in the directory structure across the project! Especially in the case of tests being contained in a single high-level directory.
There's nothing worse than having to maintain a mental mapping between source code in a location such as:
/project/src/component_a/piece_2/this_bit
and having the test(s) located somewhere such as:
/project/test/the_first_components/connection_tests/test_a
And I've worked on projects where someone did that!
What a waste of wetware cycles! 8-O Talk about violating Christopher Alexander's concept of the Quality Without a Name.
Much better is having your tests consistently located w.r.t. the location of the source code under test:
/project/test/component_a/piece_2/this_bit/test_a
Part 2.
As for the API config files, make local copies of a reference config in each local test area as part of the test environment setup that runs before executing a test. Don't sprinkle copies of configs (or data) all through your test tree.
HTH.
cheers,
Rob
BTW, really glad to see you asking this now, while you're setting things up!
In some tests I have done, I actually used the test code to write the configuration files and then delete them after the test had made use of them. It pads out the code somewhat, and I have no idea if it is good practice, but it worked. If you happen to be using Boost, its filesystem module is useful for creating directories, navigating directories, and removing files.
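On C++17 or later, std::filesystem covers the same ground as the Boost module. A minimal sketch of the write-use-delete pattern (the config contents and the ConfigFile class in the comments are made up for illustration):

    #include <filesystem>
    #include <fstream>
    namespace fs = std::filesystem;

    void testReadsGeneratedConfig() {
        const fs::path dir  = fs::temp_directory_path() / "cfg_test";
        const fs::path file = dir / "test.cfg";
        fs::create_directories(dir);

        // Write the fixture this test needs...
        std::ofstream(file) << "verbose = true\n";

        // ...exercise the code under test, e.g.:
        //   ConfigFile cfg(file.string());
        //   ASSERT_TRUE(cfg.getBool("verbose"));

        // ...then clean up so nothing leaks between tests.
        fs::remove_all(dir);
    }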
I agree with what @Richard Quirk said, but you might also want to make your test suite class a friend of the class you're testing and test its private functions.
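Something like this, with all names invented for illustration:

    #include <fstream>
    #include <string>

    class ConfigReader {
        // Grant the test suite access to the private parsing helpers.
        friend class ConfigReaderTest;
    public:
        bool load(const std::string& path) {
            std::ifstream in(path);
            std::string line;
            while (std::getline(in, line))
                if (!parseLine(line)) return false;
            return true;
        }
    private:
        // Stand-in implementation so the sketch is self-contained.
        bool parseLine(const std::string& line) {
            return line.find('=') != std::string::npos;
        }
    };

    // The friend declaration makes the private call below legal:
    class ConfigReaderTest {
    public:
        void testParseLine() {
            ConfigReader r;
            bool ok = r.parseLine("key = value");  // private, but we're a friend
            (void)ok;  // assert with your framework's macro of choice
        }
    };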
For things like this I always have a small utility class that loads a config into a memory buffer; from there it gets fed into the actual config class. This means the real source doesn't matter: it could be a file or a DB. For the unit test, the config is hard-coded in a std::string that is then passed to the class under test. You can easily simulate corrupted data to test failure paths.
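Concretely, the utility can be as small as this (the Config class and key/value syntax in the comments are assumptions, not from any particular library):

    #include <fstream>
    #include <sstream>
    #include <string>

    // Load any file into a memory buffer; this is the only place
    // that touches the filesystem.
    std::string loadFile(const std::string& path) {
        std::ifstream in(path);
        std::ostringstream buf;
        buf << in.rdbuf();
        return buf.str();
    }

    // The config class itself only ever sees a buffer, so:
    //   Config prod(loadFile("app.cfg"));   // production: from a file
    //   Config good("port = 8080\n");       // test: hard-coded, valid
    //   Config bad("port = eight!!\n");     // test: simulated corruption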
I use UnitTest++. I have the tests as part of the src tree. So:
solution/project1/src <-- source code
solution/project1/src/tests <-- unit test code
solution/project2/src <-- source code
solution/project2/src/tests <-- unit test code
Assuming that you have control over the design of the library, I would expect you'd be able to refactor it so that you separate the concern of actually reading the file from that of interpreting the contents as a configuration file:
class FileReader reads the file and produces an input stream,
class ConfigFileInterpreter validates/interprets etc. the contents of that input stream.
Now to test FileReader you'd need only a very small number of actual files (empty, binary, plain text, etc.), and for ConfigFileInterpreter you would use a stub of the FileReader class that returns an input stream to read from. Now you can prepare all your various config situations as strings, and you would not have to read so many files.
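A rough sketch of that split; every name below is illustrative rather than taken from the real library:

    #include <istream>
    #include <memory>
    #include <sstream>
    #include <string>

    // The only class that touches the filesystem: it just produces a stream.
    class FileReader {
    public:
        virtual ~FileReader() = default;
        virtual std::unique_ptr<std::istream> open() = 0;
    };

    // Interprets whatever stream it is handed; testable entirely in memory.
    class ConfigFileInterpreter {
    public:
        explicit ConfigFileInterpreter(FileReader& reader) : reader_(reader) {}
        bool parse() {
            auto in = reader_.open();
            std::string line;
            while (std::getline(*in, line)) {
                // validate / interpret each line here
            }
            return true;
        }
    private:
        FileReader& reader_;
    };

    // Test stub: the "file" is just a string inside the test.
    class StubFileReader : public FileReader {
    public:
        explicit StubFileReader(std::string s) : content_(std::move(s)) {}
        std::unique_ptr<std::istream> open() override {
            return std::make_unique<std::istringstream>(content_);
        }
    private:
        std::string content_;
    };

Each test then builds a StubFileReader with whatever config text it needs, valid or deliberately broken, and feeds it to ConfigFileInterpreter.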
You will not find a unit testing framework worse than CppUnit. Seriously, anybody who recommends CppUnit has not really taken a look at any of the competing frameworks.
So yes, go for a unit testing framework, but do not use CppUnit.