I wrote a function that takes a list of proto.Message objects. Looking at the documentation, it seems that proto.Message wraps protoreflect.ProtoMessage, which contains a single method, ProtoReflect() Message. Looking at the documentation for Message, it declares a number of other methods that return types referenced by the protoreflect package.
It seems that attempting to create a mock proto.Message would be a lot more work than it's worth, but I don't want to go through the whole process of creating a protobuf file, compiling it, and referencing it just for unit testing.
Is there another way I can create a mock proto.Message object?
After looking into this further, I decided that the best thing to do would be to create an actual protobuf message:
syntax = "proto3";
package my.package;
option go_package = "path/to/my/package"; // golang
message TestRecord {
  string key = 1;
  string value = 2;
}
and compile it into a Go file, which I can then modify for my own purposes:
$ protoc --go_out=./gopb/ --go_opt=paths=source_relative *.proto
Once the generated test file has been created, I can delete the .proto file or keep it for the record.
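With the generated type in place, the test can use it anywhere a proto.Message is expected. A minimal sketch (processMessages and the gopb import path are placeholders for your own code):

package mypackage

import (
	"testing"

	"google.golang.org/protobuf/proto"

	"path/to/my/package/gopb" // hypothetical import path for the generated code
)

// processMessages stands in for the function under test, which accepts
// arbitrary proto.Message values.
func processMessages(msgs []proto.Message) int {
	return len(msgs)
}

func TestProcessMessages(t *testing.T) {
	// TestRecord was generated from the .proto file above and satisfies
	// proto.Message out of the box, so no hand-rolled mock is needed.
	msgs := []proto.Message{
		&gopb.TestRecord{Key: "a", Value: "1"},
		&gopb.TestRecord{Key: "b", Value: "2"},
	}
	if got := processMessages(msgs); got != 2 {
		t.Fatalf("expected 2 messages, got %d", got)
	}
}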
I've been working with FlatBuffers as a solution for various things in my project, one of them specifically being JSON support. However, while FB natively supports JSON generation, the documentation for FlatBuffers is poor and the process is somewhat cumbersome. Right now, I am working in the Object->JSON direction. The issue I am having doesn't really arise the other way around (I think).
I currently have JSON generation working per an example I found here (line 630, JsonEnumsTest()): by parsing a .fbs file into a flatbuffers::Parser, building and packaging my FlatBuffer object, then running GenerateText() to generate a JSON string. The code I have is simpler than the example in test.cpp, and looks vaguely like this:
bool MyFBSchemaWrapper::asJson(std::string& jsonOutput)
{
    // **This is the section I don't like having to do
    std::string schemaFile;
    if (flatbuffers::LoadFile((std::string(getenv("FBS_FILE_PATH")) + "MyFBSchema.fbs").c_str(), false, &schemaFile))
    {
        flatbuffers::Parser parser;
        // The include-path array must be null-terminated.
        const char *includePaths[] = { getenv("FBS_FILE_PATH"), nullptr };
        parser.Parse(schemaFile.c_str(), includePaths);
        // **End bad section
        parser.opts.strict_json = true;

        flatbuffers::FlatBufferBuilder fbBuilder;
        auto testItem1 = fbBuilder.CreateString("test1");
        auto testItem2 = fbBuilder.CreateString("test2");
        MyFBSchemaBuilder myBuilder(fbBuilder);
        myBuilder.add_item1(testItem1);
        myBuilder.add_item2(testItem2);
        FinishMyFBSchemaBuffer(fbBuilder, myBuilder.Finish());

        return GenerateText(parser, fbBuilder.GetBufferPointer(), &jsonOutput);
    }
    return false;
}
Here's my issue: I'd like to avoid having to include the .fbs files to set up my Parser. I don't want to clutter an already large monolithic program by adding even more random folders, directories, environment variables, etc. I'd like to be able to generate JSON from the compiled FlatBuffer schemas, and not have to search for a file to do so.
Is there a way for me to avoid having to read my .fbs schemas back into the parser? My intuition is pointing to no, but the lack of documentation and community support on the topic of FlatBuffers & JSON is telling me there might be a way. I'm hoping that there's a way to use the already generated MyFBSchema_generated.h to create a JSON string.
Yes, see Mini Reflection in the documentation: http://google.github.io/flatbuffers/flatbuffers_guide_use_cpp.html
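To sketch what that looks like (hedged, since the exact names depend on your generated code): if the schema is compiled with flatc's --reflect-names option, the generated header gains a MyFBSchemaTypeTable() function, and flatbuffers/minireflect.h can then stringify a buffer without any .fbs file or Parser at runtime:

#include "flatbuffers/minireflect.h"
#include "MyFBSchema_generated.h"  // generated with: flatc --cpp --reflect-names MyFBSchema.fbs

std::string asJsonViaMiniReflection()
{
    flatbuffers::FlatBufferBuilder fbBuilder;
    auto testItem1 = fbBuilder.CreateString("test1");
    auto testItem2 = fbBuilder.CreateString("test2");
    MyFBSchemaBuilder myBuilder(fbBuilder);
    myBuilder.add_item1(testItem1);
    myBuilder.add_item2(testItem2);
    FinishMyFBSchemaBuffer(fbBuilder, myBuilder.Finish());

    // MyFBSchemaTypeTable() is emitted by --reflect-names; no schema file
    // needs to be located or parsed at runtime.
    return flatbuffers::FlatBufferToString(fbBuilder.GetBufferPointer(),
                                           MyFBSchemaTypeTable());
}

Note that the mini-reflection output is a JSON-like dump rather than guaranteed strict JSON, so check it against your consumers before dropping the Parser approach.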
I'm trying to create my RKEntityMapping outside of my unit test. The problem I have is that it only works if I create it inside my test. For example, this works:
RKEntityMapping *accountListMapping = [RKEntityMapping mappingForEntityForName:@"CustomerListResponse" inManagedObjectStore:_sut.managedObjectStore];
[accountListMapping addAttributeMappingsFromDictionary:@{@"count": @"pageCount",
                                                         @"page": @"currentPage",
                                                         @"pages": @"pages"}];
While the following does not work, even though the call to accountListMapping returns exactly what is shown above, using the same managed object store:
RKEntityMapping *accountListMapping = [_sut accountListMapping];
When the RKEntityMapping is created in _sut I get this error:
<unknown>:0: error: -[SBAccountTests testAccountListFetch] : 0x9e9cd10: failed with error:
Error Domain=org.restkit.RestKit.ErrorDomain Code=1007 "Cannot perform a mapping operation
with a nil destination object." UserInfo=0x8c64490 {NSLocalizedDescription=Cannot perform
a mapping operation with a nil destination object.}
I'm assuming the nil destination object it is referring to is destinationObject:nil.
RKMappingTest *maptest = [RKMappingTest testForMapping:accountListMapping
                                          sourceObject:_parsedJSON
                                     destinationObject:nil];
Make sure that the file you have created has target membership in both your main target and your test target. You can find this by:
clicking on the .m file of your class
opening the utilities toolbar (the one on the right)
ticking both targets in the target membership section.
This is because if your class does not have target membership in your test target, the test target actually creates its own copy of the class you have created, meaning it has a different binary than the main target. That copy then uses the test target's version of the RestKit binary rather than the main project's RestKit. This causes the isKindOfClass check to fail when RestKit tries to see if the mapping you passed is of type RKObjectMapping from the main project, because it is of type RKObjectMapping from the test target's version of RestKit; so your mapping doesn't get used, and you get your crash.
At least this is my understanding of how the LLVM compiler works. I'm new to iOS dev so please feel free to correct if I got something wrong.
This problem may also be caused by duplicated class definitions when RestKit components are included for multiple targets individually using CocoaPods.
For more information on this have a look at this answer.
I used a category on the mapped object, for example:
RestKitMappings+SomeClass
+ (RKObjectMapping *)responsemappings {
    return mappings;
}
This category has to be included in the test target as well; otherwise the mapping will not be passed.
When you're running a test you aren't using the entire mapping infrastructure, so RestKit isn't going to create a destination object for you. It's only going to test the mapping itself. So you need to provide all three pieces of information to the test method or it can't work.
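Concretely, a sketch of the test with the third piece supplied (the entity name and context accessor follow the question's setup and may need adjusting):

// Insert a destination object explicitly so the mapping test has all three
// pieces: mapping, source object, and destination object.
NSManagedObject *destination =
    [NSEntityDescription insertNewObjectForEntityForName:@"CustomerListResponse"
                                  inManagedObjectContext:_sut.managedObjectStore.mainQueueManagedObjectContext];

RKMappingTest *maptest = [RKMappingTest testForMapping:accountListMapping
                                          sourceObject:_parsedJSON
                                     destinationObject:destination];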
How can I create a Java or JavaScript JSON webservice to retrieve data from a simple properties file? My intention is to use this as global property storage for a Jenkins instance that runs many unit tests. The master property file also needs to be capable of being manually edited and stored in source control.
I am just wondering what method people would recommend that would be easiest for a junior-level programmer like me. I need read capability at minimum and, if it's not too hard, write capability also; it is therefore not required to be REST.
If something like this already exists in Java or Groovy, a link to that resource would be appreciated. I am a SoapUI expert but I am unsure if a mock service could do this sort of thing.
I found something like this in Ruby but I could not get it to work as I am not a Ruby programmer at all.
There are a multitude of Java REST frameworks, but I'm most familiar with Jersey so here's a Groovy script that gives a simple read capability to a properties file.
@Grapes([
    @Grab(group='org.glassfish.jersey.containers', module='jersey-container-grizzly2-http', version='2.0'),
    @Grab(group='org.glassfish.jersey.core', module='jersey-server', version='2.0'),
    @Grab(group='org.glassfish.jersey.media', module='jersey-media-json-jackson', version='2.0')
])
import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory
import org.glassfish.jersey.jackson.JacksonFeature

import javax.ws.rs.GET
import javax.ws.rs.Path
import javax.ws.rs.Produces

@Path("properties")
class PropertiesResource {
    @GET
    @Produces("application/json")
    Properties get() {
        new File("test.properties").withReader { Reader reader ->
            Properties p = new Properties()
            p.load(reader)
            return p
        }
    }
}

def rc = new org.glassfish.jersey.server.ResourceConfig(PropertiesResource, JacksonFeature)
GrizzlyHttpServerFactory.createHttpServer('http://localhost:8080/'.toURI(), rc).start()
System.console().readLine("Press any key to exit...")
Unfortunately, since Jersey uses the 3.1 version of the asm library, there are conflicts with Groovy's 4.0 version of asm unless you run the script using the groovy-all embeddable jar (it won't work by just calling groovy on the command line and passing the script). I also had to supply an Apache Ivy dependency. (Hopefully the Groovy team will resolve these in the next release; the asm one in particular has caused me grief in the past.) So you can call it like this (supply the full paths to the classpath jars):
java -cp ivy-2.2.0.jar:groovy-all-2.1.6.jar groovy.lang.GroovyShell restProperties.groovy
All you have to do is create a properties file named test.properties, copy the above script into a file named restProperties.groovy, and run it via the above command line. Then you can run the following on Unix to try it out.
curl http://localhost:8080/properties
And it will return a JSON map of your properties file.
At my server, we receive Self-Described Messages (as defined here... which, by the way, wasn't all that easy, as there aren't any 'good' examples of this in C++).
At this point I am having no issue creating messages from these self-described ones. I can take the FileDescriptorSet, go through each FileDescriptorProto, adding each to a DescriptorPool (using BuildFile, which also gives me every defined FileDescriptor).
From here I can create any of the messages which were defined in the FileDescriptorSet with a DynamicMessageFactory instantiated with the DescriptorPool and calling GetPrototype (which is very easy to do, as our SelfDescribedMessage carries the message's full_name(), and thus we can call the FindMessageTypeByName method of the DescriptorPool, giving us the properly encoded message prototype).
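For reference, that flow condensed into a sketch (the function name and signature are mine, not from our actual code; BuildFile assumes each file's imports were built before it, which a self-described set normally guarantees):

#include <google/protobuf/descriptor.h>
#include <google/protobuf/descriptor.pb.h>
#include <google/protobuf/dynamic_message.h>

using namespace google::protobuf;

// Rebuild a mutable message instance from a received FileDescriptorSet
// and the message's fully qualified name.
Message* BuildFromSelfDescribed(const FileDescriptorSet& fdSet,
                                const std::string& fullName,
                                DescriptorPool* pool,
                                DynamicMessageFactory* factory)
{
    // Register every FileDescriptorProto with the pool.
    for (const FileDescriptorProto& fdProto : fdSet.file())
        pool->BuildFile(fdProto);

    // Look the type up by full_name() and mint an instance.
    const Descriptor* desc = pool->FindMessageTypeByName(fullName);
    return desc != nullptr ? factory->GetPrototype(desc)->New() : nullptr;
}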
The question is: how can I take each already-defined Descriptor or Message and dynamically BUILD a 'master' message that contains all of the defined messages as nested messages? This would primarily be used for saving the current state of the messages. Currently we're handling this by just instancing one of each message type in the server (to keep a central state across different programs). But when we want to 'save off' the current state, we're forced to stream them to disk as defined here. They're streamed one message at a time (with a size prefix). We'd like to have ONE message (one to rule them all) instead of the steady stream of separate messages. This could be used for other things once it is worked out (network-based shared state with optimized and easy serialization).
Since we already have the cross-linked and defined Descriptors, one would think there would be an easy way to build 'new' messages from the already-defined ones. So far the solution has eluded us. We've tried creating our own DescriptorProto and adding new fields of the type from our already-defined Descriptors, but got lost (haven't deep-dived into this one yet). We've also looked at possibly adding them as extensions (unknown at this time how to do so). Do we need to create our own DescriptorDatabase (also unknown at this time how to do so)?
Any insights?
Linked example source on BitBucket.
Hopefully this explanation will help.
I am attempting to dynamically build a Message from a set of already-defined Messages. The set of already-defined messages is created by using the "self-described" method explained (briefly) in the official C++ protobuf tutorial (i.e. these messages are not available in compiled form). This newly defined message will need to be created at runtime.
I have tried using the straight Descriptors for each message and attempted to build a FileDescriptorProto. I have also tried looking at the DescriptorDatabase methods. Both with no luck. I am currently attempting to add these defined messages as an extension to another message (even though at compile time those defined messages and their 'descriptor set' were not classified as extending anything), which is where the example code starts.
You need a protobuf::DynamicMessageFactory:
{
    using namespace google;

    // some_desc is the Descriptor of the message type to instantiate.
    protobuf::DynamicMessageFactory dmf;
    protobuf::Message* actual_msg = dmf.GetPrototype(some_desc)->New();

    // Reflection lets you get and set fields on the dynamic message.
    const protobuf::Reflection* refl = actual_msg->GetReflection();
    const protobuf::FieldDescriptor* fd = some_desc->FindFieldByName("someField");
    refl->SetString(actual_msg, fd, "whee");
    ...
    cout << actual_msg->DebugString() << endl;
}
I was able to solve this problem by dynamically creating a .proto file and loading it with an Importer.
The only requirement is for each client to send across its proto file (only needed at init... not during full execution). The server then saves each proto file to a temp directory. An alternative, if possible, is to just point the server to a central location that holds all of the needed proto files.
This was done by first using a DiskSourceTree to map actual path locations to in-program virtual ones, then building a .proto file that imports every proto file that was sent across AND defines an optional field in a 'master message'.
After the master.proto has been saved to disk, I import it with the Importer. Now, using the Importer's DescriptorPool and a DynamicMessageFactory, I'm able to reliably generate the whole message under one message. I will be putting an example of what I am describing up later tonight or tomorrow.
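In the meantime, a rough sketch of the flow just described (paths, names, and the error collector are illustrative):

#include <iostream>
#include <google/protobuf/compiler/importer.h>
#include <google/protobuf/dynamic_message.h>

using namespace google::protobuf;

// Minimal error collector so parse problems in the .proto files are visible.
class ErrorPrinter : public compiler::MultiFileErrorCollector {
    void AddError(const std::string& file, int line, int column,
                  const std::string& message) override {
        std::cerr << file << ":" << line << ": " << message << std::endl;
    }
};

int main() {
    // Map the directory holding the received .proto files (plus the
    // generated master.proto) to the importer's virtual root.
    compiler::DiskSourceTree sourceTree;
    sourceTree.MapPath("", "/tmp/received_protos");  // illustrative path

    ErrorPrinter errors;
    compiler::Importer importer(&sourceTree, &errors);

    // master.proto imports every client proto and declares one optional
    // field per client message type inside a single master message.
    importer.Import("master.proto");

    const Descriptor* masterDesc =
        importer.pool()->FindMessageTypeByName("my.package.MasterMessage");
    if (masterDesc == nullptr) return 1;

    DynamicMessageFactory factory(importer.pool());
    Message* master = factory.GetPrototype(masterDesc)->New();
    std::cout << master->DebugString();
    delete master;
}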
If anyone has any suggestions on how to make this process better or how to do it differently, please say so.
I will be leaving this question unanswered up until the bounty is about to expire just in case someone else has a better solution.
What about serializing all the messages into strings, and making the master message a sequence of (byte) strings, a la
message MessageSet
{
    required FileDescriptorSet proto_files = 1;
    repeated bytes serialized_sub_message = 2;
}
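Assuming MessageSet itself is compiled into the server (liveMessages and fileDescriptorSet below are placeholders), packing would look roughly like this:

// Pack the schemas plus every live message into one container.
MessageSet set;
*set.mutable_proto_files() = fileDescriptorSet;
for (const google::protobuf::Message* msg : liveMessages)
    set.add_serialized_sub_message(msg->SerializeAsString());

std::string blob;  // a single string to write to disk or the network
set.SerializeToString(&blob);

One caveat: to parse each bytes entry back you also need to know which type it holds, so in practice you would likely add a parallel repeated string field carrying each sub-message's full type name.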
I have a class that processes 2 XML files and produces a text file.
I would like to write a bunch of unit / integration tests that can individually pass or fail for this class that do the following:
For input A and B, generate the output.
Compare the contents of the generated file to the contents of the expected output
When the actual contents differ from the expected contents, fail and display some useful information about the differences.
Below is the prototype for the class along with my first stab at unit tests.
Is there a pattern I should be using for this sort of testing, or do people tend to write zillions of TestX() functions?
Is there a better way to coax text-file differences from NUnit? Should I embed a textfile diff algorithm?
class ReportGenerator
{
    public string Generate(string inputPathA, string inputPathB)
    {
        // do stuff
    }
}
[TestFixture]
public class ReportGeneratorTests
{
    static void Diff(string pathToExpectedResult, string pathToActualResult)
    {
        using (StreamReader rs1 = File.OpenText(pathToExpectedResult))
        using (StreamReader rs2 = File.OpenText(pathToActualResult))
        {
            string actualContents = rs2.ReadToEnd();
            string expectedContents = rs1.ReadToEnd();
            // this works, but the output could be a LOT more useful.
            Assert.AreEqual(expectedContents, actualContents);
        }
    }

    static void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
    {
        ReportGenerator obj = new ReportGenerator();
        string pathToResult = obj.Generate(pathToInputA, pathToInputB);
        Diff(pathToExpectedResult, pathToResult);
    }

    [Test]
    public void TestX()
    {
        TestGenerate("x1.xml", "x2.xml", "x-expected.txt");
    }

    [Test]
    public void TestY()
    {
        TestGenerate("y1.xml", "y2.xml", "y-expected.txt");
    }

    // etc...
}
Update
I'm not interested in testing the diff functionality. I just want to use it to produce more readable failures.
As for the multiple tests with different data, use the NUnit RowTest extension:
using NUnit.Framework.Extensions;

[RowTest]
[Row("x1.xml", "x2.xml", "x-expected.xml")]
[Row("y1.xml", "y2.xml", "y-expected.xml")]
public void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
    ReportGenerator obj = new ReportGenerator();
    string pathToResult = obj.Generate(pathToInputA, pathToInputB);
    Diff(pathToExpectedResult, pathToResult);
}
You are probably asking about testing against "gold" data. I don't know if there is a specific term for this kind of testing accepted worldwide, but this is how we do it.
Create a base fixture class. It basically has "void DoTest(string fileName)", which reads the specific file into memory, executes the abstract transformation method "string Transform(string text)", then reads fileName.gold from the same place and compares the transformed text with what was expected. If the content differs, it throws an exception. The exception thrown contains the line number of the first difference as well as the text of the expected and actual lines. As the text is stable, this is usually enough information to spot the problem right away. Be sure to mark lines with "Expected:" and "Actual:", or you will be guessing forever which is which when looking at test results.
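A minimal sketch of such a base fixture (names and file layout are illustrative, not prescriptive):

using System;
using System.IO;

public abstract class GoldDataFixture
{
    // Each concrete fixture implements the transformation under test.
    protected abstract string Transform(string text);

    protected void DoTest(string fileName)
    {
        string input = File.ReadAllText(fileName + ".txt");
        string expected = File.ReadAllText(fileName + ".gold");
        string actual = Transform(input);

        string[] expectedLines = expected.Split('\n');
        string[] actualLines = actual.Split('\n');
        int shared = Math.Min(expectedLines.Length, actualLines.Length);
        for (int i = 0; i < shared; i++)
        {
            if (expectedLines[i] != actualLines[i])
                throw new Exception(
                    "Difference at line " + (i + 1) +
                    "\nExpected: " + expectedLines[i] +
                    "\nActual:   " + actualLines[i]);
        }
        if (expectedLines.Length != actualLines.Length)
            throw new Exception("Output and gold file differ in length.");
    }
}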
Then you will have specific test fixtures, where you implement the Transform method which does the right job, and then have tests which look like this:
[Test] public void TestX() { DoTest("X"); }
[Test] public void TestY() { DoTest("Y"); }
The name of the failed test will instantly tell you what is broken. Of course, you can use row testing to group similar tests. Having separate tests also helps in a number of situations, like ignoring tests, communicating tests to colleagues, and so on. It is not a big deal to create a snippet which will create a test for you in a second; you will spend much more time preparing data.
Then you will also need some test data and a way for your base fixture to find it; be sure to set up rules about this for the project. If a test fails, dump the actual output to a file next to the gold one, and erase it if the test passes. This way you can use a diff tool when needed. When no gold data is found, the test fails with an appropriate message, but the actual output is written anyway, so you can check that it is correct and copy it to become the "gold".
I would probably write a single unit test that contains a loop. Inside the loop, I'd read two XML files and a diff file, then diff the XML files (without writing the result to disk) and compare it to the diff file read from disk. The files would be numbered, e.g. a1.xml, b1.xml, diff1.txt; a2.xml, b2.xml, diff2.txt; a3.xml, b3.xml, diff3.txt, etc., and the loop stops when it doesn't find the next number.
Then, you can write new tests just by adding new text files.
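A sketch of that loop, adapted to the question's ReportGenerator (the answer's in-memory diff of the two XML files is stood in for here by comparing the generator's output against the expected file; naming follows the description above):

[Test]
public void AllGoldenCases()
{
    var generator = new ReportGenerator();
    // a1.xml/b1.xml/diff1.txt, a2.xml/..., stopping at the first missing number.
    for (int i = 1; File.Exists("a" + i + ".xml"); i++)
    {
        string pathToResult = generator.Generate("a" + i + ".xml", "b" + i + ".xml");
        Assert.AreEqual(File.ReadAllText("diff" + i + ".txt"),
                        File.ReadAllText(pathToResult),
                        "case " + i + " differs");
    }
}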
Rather than call .AreEqual you could parse the two input streams yourself, keeping a count of line and column and comparing the contents. As soon as you find a difference, you can generate a message like...
Line 32 Column 12 - Found 'x' when 'y' was expected
You could optionally enhance that by displaying multiple lines of output
Difference at Line 32 Column 12, first difference shown
A = this is a txst
B = this is a tests
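A sketch of that comparison, assuming both streams have already been read into strings:

// Walk both strings together, tracking line and column, and report the
// first mismatch in the format shown above.
static string FirstDifference(string expected, string actual)
{
    int line = 1, column = 1;
    int shared = Math.Min(expected.Length, actual.Length);
    for (int i = 0; i < shared; i++)
    {
        if (expected[i] != actual[i])
            return "Line " + line + " Column " + column +
                   " - Found '" + actual[i] + "' when '" + expected[i] + "' was expected";
        if (expected[i] == '\n') { line++; column = 1; }
        else { column++; }
    }
    return expected.Length == actual.Length
        ? null
        : "Inputs are identical up to the length of the shorter one";
}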
Note: as a rule, I'd only generate one of the two streams through my code. The other I'd grab from a test/text file, having verified by eye or another method that the data it contains is correct!
I would probably use XmlReader to iterate through the files and compare them. When I hit a difference I would display an XPath to the location where the files are different.
PS: In reality it was always enough for me to just read the whole file into a string and compare the two strings. For reporting, it is enough to see that the test failed. Then, when debugging, I usually diff the files using Araxis Merge to see where exactly I have issues.