I am working on custom IVisual implementations. The recommended pattern includes a converter method, which converts the DataView into the visual's own view model. I am curious why the converter is declared as public instead of private.
In the Hello World example it is coded here, and explained here.
public static converter(dataView: DataView): HelloViewModel {
...
}
In the code, converter seems to be accessed only from within the class itself, so it would naturally be a private method. Moreover, making it public requires also exporting its return type, HelloViewModel, which likewise seems to be used only internally.
Possible answer: There are a handful of built-in visuals that ship with their own test classes, like treemapTests.ts for treemap.ts. These classes also test the functionality of converter methods, and this is the only place where I have seen converter called from outside of its class.
Is this the entire reason converter methods have been made public, or is there a plan to make them a formal part of the IVisual interface in the future, or is there something else going on?
Great question :) No reason. Originally there was talk of changing the update options to contain the visual's view model instead of the DataView; Power BI would then use the public converter method to pass in the right view model. That way, other sites hosting Power BI visuals wouldn't need any dependency on DataView. I don't think we'll be going that route, though.
I have two questions related to coder issues I am facing with my Dataflow pipeline.
How do I go about setting a coder for my custom data types? The class consists of just three items: two doubles and another parameterized property. I tried annotating the type with SerializableCoder, but I still end up with the error "com.google.cloud.dataflow.sdk.coders.CannotProvideCoderException: Cannot provide coder based on value with class interface java.util.Set: No CoderFactory has been registered for the class." The Set actually contains the parameterized custom data type, so I am assuming that the custom data type is the problem. I could not find enough documentation/examples on the right way to do this. Please point me to the right place if it's available.
Even without the custom data type, whenever I try switching to a parameterized version of my transform functions, I get coder errors. Specifically, inside a parameterized composite transform, a ParDo works with parameterized types, but when I apply a Combine.PerKey on the resulting PCollection after the ParDo, it fails with a CoderNotFoundException.
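Roughly, the failing shape is the following (a sketch with placeholder names like ExtractFn and MergeFn, not my actual code):

PCollection<KV<String, CustomType<Double>>> mapped =
    input.apply(ParDo.of(new ExtractFn()));  // the parameterized ParDo works fine

PCollection<KV<String, CustomType<Double>>> combined =
    mapped.apply(Combine.perKey(new MergeFn()));  // fails with CoderNotFoundException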
Any help regarding these two items would be appreciated, as I have been stuck on this for some time now.
It looks like you have been bitten by two issues. Thanks for bringing them to our attention! Fortunately, there are easy workarounds for both while we improve things.
The first issue is that the default coder registry does not have an entry for mapping Set.class to SetCoder. We have filed GitHub issue #56 to track its resolution. In the meantime, you can use the following code to perform the needed registration:
pipeline.getCoderRegistry().registerCoder(Set.class, SetCoder.class);
The second issue is that parameterized types currently require advanced treatment in the coder registry, so the @DefaultCoder annotation will not be honored. We have filed GitHub issue #57 to track this. The best way to ensure that SerializableCoder is used everywhere for CustomType is to register a CoderFactory for your type that will return a SerializableCoder. Supposing your type is something like this:
public class CustomType<T extends Serializable> implements Serializable {
T field;
}
Then the following code registers a CoderFactory that produces appropriate SerializableCoder instances:
pipeline.getCoderRegistry().registerCoder(CustomType.class, new CoderFactory() {
    @Override
    public Coder<?> create(List<? extends Coder<?>> componentCoders) {
        // No matter what T is, return a SerializableCoder for CustomType
        return SerializableCoder.of(CustomType.class);
    }

    @Override
    public List<Object> getInstanceComponents(Object value) {
        // Return the T inside your CustomType<T> to enable coder inference for Create
        return Collections.singletonList(((CustomType<Object>) value).field);
    }
});
Now, whenever you use CustomType in your pipeline, the coder registry will produce a SerializableCoder.
Note that SerializableCoder is not deterministic (the bytes of encoded objects are not necessarily equal for objects that are equals()) so values encoded using this coder cannot be used as keys in a GroupByKey operation.
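Putting it together, a minimal sketch of the pipeline setup with both workarounds applied (the options variable is assumed, and customTypeFactory stands for the anonymous CoderFactory shown above, extracted into a variable):

Pipeline pipeline = Pipeline.create(options);

// Workaround for GitHub issue #56: map Set.class to SetCoder
pipeline.getCoderRegistry().registerCoder(Set.class, SetCoder.class);

// Workaround for GitHub issue #57: always encode CustomType with SerializableCoder
pipeline.getCoderRegistry().registerCoder(CustomType.class, customTypeFactory);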
I am connecting to a REST service. It's QuickBlox, to be specific, but it should not matter other than that the REST API is defined like a SQL query (but notably minus the joins) and I can do CRUD on it.
QuickBlox provides an iOS SDK, which gives me an interface for constructing the REST query with a callback.
In my program I want objects like User, UserResourceTable, Card, for example. (It's not exactly like this as User resides in a special QuickBlox module, but for the sake of the question it's OK to ignore this.)
I have constructed something I think is similar to the ActiveRecord pattern (I hope). So my User class has CRUD methods, and so does my Card class. These CRUD methods take callbacks (or rather, Objective-C blocks, which are closures).
Originally I had "read" as a class method (like a Java static method), since before calling read there is no instance to read from, so I thought I would provide a static factory method on the aforementioned classes.
UserResourceTable is not directly exposed to whoever uses the User class. It's created when a User is created and read when a User is read.
Now, this all went OK until I started to think about unit testing. I ended up deciding to use dependency injection (DI), so I want UserResourceTable to be injected into User. But it seems difficult to do DI while read is a class/static method, and I now think I want read to be an instance method. (In particular, there are no static instance variables in Objective-C, and even if I used a file-level static variable to approximate one, I wouldn't know when to inject it. It just smells bad to me.)
Now read would be something like this (pseudo code):
User user = User(); // user just so we can call read()
User userFetched = user.read();
A few questions:
Is it a good idea to have read as an instance method?
Is ActiveRecord suitable for my situation? Perhaps a better model?
Right now User, UserResourceTable, and Card all use the QuickBlox SDK directly (or rather, through a thin wrapper). I intend to inject UserResourceTable into User. I would also like to inject QuickBlox's interface (or the thin wrapper) into User, UserResourceTable, and Card, and I would have some factories to do this. Does that sound any good?
I have a simple question regarding accessing member variables of a model object.
I have the following model objects:
@Entity
public class Person extends Model {
    @Id
    public Long id;
    public String name;
}

@Entity
public class Account extends Model {
    @Id
    public String email;
    public String password;
    @OneToOne
    public Person person;
}
So far so good; any given person can have a single account. The Account object is copied from the zentasks example. After authentication, I redirect to the index page, which displays the user's real name as stored in the Person.name member variable. The Account object is made available to the page just as in the zentasks example, like so:
Account.find.byId(Controller.request().username());
Now the following strange things happen in the template, which I do not understand:
@account.person.name
results in a null value inserted into the template, while calling:
@account.person.getName() or @account.person.getName
results as expected in the correct name from the person object.
@account.person
shows the .toString() of the person object, also correctly including the name.
So to summarize: what is wrong with the code above? Why can I access the account.person value without any problems, but when I access account.person.name it no longer works?
Thank you in advance!
Richard
This is because JPA uses aspects to intercept getter usage and fill in missing data on objects that are lazy-loaded. I don't know what conventional thinking is, but for this reason I would never use public members with JPA; it breaks the framework consistently.
If you really want to use public members, you'll have to mark relationships as eager fetching:
@OneToMany(fetch=FetchType.EAGER)
or explicitly fetch all of the object tree you'll need in your template (ugh).
In your case the relationship, a OneToOne, is defined on the other side; if you define it on the Account side, it should fetch eagerly by default. I forget whether you can define OneToOne on both entities; I think you can, but you might have to fiddle with it a bit.
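For illustration, an eagerly fetched version of the relationship would look something like this (a sketch based on the entities from the question):

@Entity
public class Account extends Model {
    @Id
    public String email;
    public String password;

    // Load the related Person together with the Account, so that
    // public field access in the template sees populated data.
    @OneToOne(fetch = FetchType.EAGER)
    public Person person;
}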
Overall, don't use public members with JPA; they will break things. Better yet, ditch JPA and use Anorm instead; it maps to the problem domain much more successfully than JPA. Issues like this consistently cause JPA implementations to take twice as much implementation time as anyone seems able to predict.
I just stumbled upon an answer posted by Guillaume Bort, which explains things.
Read here:
https://groups.google.com/d/topic/play-framework/CNjH3w_yF6E/discussion
Hope this helps!
Because of lazy loading, the field values only get loaded when you access them from within the class itself (something that, under normal circumstances, would happen through a getter/setter).
In order to load the values, you either have to write getters and setters, or you can create a method that touches every value.
For the latter, you can add the following method to your Account entity:
public void checker() {
    // Simply referencing each field forces it to be loaded
    if (email == null) {}
    if (password == null) {}
    if (person == null) {}
}
This will load every value without noticeably hurting performance.
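For comparison, a minimal sketch of the getter-based approach mentioned above, applied to the Person entity from the question:

@Entity
public class Person extends Model {
    @Id
    public Long id;
    public String name;

    // Going through a getter lets the lazy-loading machinery
    // intercept the call and populate the field first.
    public String getName() {
        return name;
    }
}

This matches the observation in the question that @account.person.getName() returns the expected value.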
Suppose I have several OrderProcessors, each of which handles an order a little differently.
The decision about which OrderProcessor to use is done according to the properties of the Order object, and is done by a factory method, like so:
public IOrderProcessor CreateOrderProcessor(IOrdersRepository repository, Order order, DiscountPercentages discountPercentages)
{
    if (order.Amount > 5 && order.Unit.Price < 8)
    {
        return new DiscountOrderProcessor(repository, order, discountPercentages.FullDiscountPercentage);
    }

    if (order.Amount < 5)
    {
        // Offer a more modest discount
        return new DiscountOrderProcessor(repository, order, discountPercentages.ModestDiscountPercentage);
    }

    return new OutrageousPriceOrderProcessor(repository, order);
}
Now, my problem is that I want to verify that the returned OrderProcessor has received the correct parameters (for example, the correct discount percentage).
However, those properties are not public on the OrderProcessor entities.
How would you suggest I handle this scenario?
The only solution I was able to come up with is making the discount percentage property of the OrderProcessors public, but it seems like overkill to do that just for the purpose of unit testing...
One way around this is to change the fields you want to test to internal instead of private and then set the project's internals visible to the testing project. You can read about this here: http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.internalsvisibletoattribute.aspx
You would do something like this in your AssemblyInfo.cs file:
[assembly:InternalsVisibleTo("Orders.Tests")]
Although you could argue that your unit tests should not necessarily care about the private fields of your class. Maybe it's better to pass the values into the factory method and write unit tests for the expected result when some method (say, Calculate() or something similar) is called on the interface.
Another approach would be to unit test the concrete types (DiscountOrderProcessor, etc.) and confirm the return values of their public methods/properties, and then write unit tests for the factory method verifying that it returns the correct type of interface implementation.
These are the approaches I usually take when writing similar code; however, there are many ways to tackle a problem like this. I would recommend figuring out where you would get the most value from unit tests and writing accordingly.
If the discount percentage is not public, then it's not part of the IOrderProcessor contract and therefore doesn't need to be verified. Just have a set of unit tests for DiscountOrderProcessor to verify that it properly computes your discounts based on the discount percentage passed in via the constructor.
You have a couple of choices, as I see it. You could create specializations of DiscountOrderProcessor:
public class FullDiscountOrderProcessor : DiscountOrderProcessor
{
    public FullDiscountOrderProcessor(IOrdersRepository repository, Order order, DiscountPercentages discountPercentages)
        : base(repository, order, discountPercentages.FullDiscountPercentage)
    { }
}

public class ModestDiscountOrderProcessor : DiscountOrderProcessor
{
    public ModestDiscountOrderProcessor(IOrdersRepository repository, Order order, DiscountPercentages discountPercentages)
        : base(repository, order, discountPercentages.ModestDiscountPercentage)
    { }
}
and check that the correct type is returned.
You could pass in a factory for creating the DiscountOrderProcessor that just takes an amount; then you could check that it was called with the correct params.
Or you could provide a virtual method that creates the DiscountOrderProcessor and check that it is called with the correct params.
I quite like the first option personally, but all of these approaches suffer from the same problem: in the end you can't check the actual value, so someone could change your discount amounts and you wouldn't know. Even with the first approach, you'd end up unable to test what value was applied to FullDiscountOrderProcessor.
You need some way to check the actual values, which leaves you with two options:
You could make the properties public (or internal, using InternalsVisibleTo) so you can interrogate them.
You could take the returned object and check that it correctly applies the discount to some object which you pass into it.
Personally, I'd go for making the properties internal, but it depends on how the objects interact; if passing a mock object into the discount order processor and verifying that it is acted on correctly is simple, then that might be the better solution.
I'm new to the Repository Pattern and after doing a lot of reading on the web I have a rough understanding of what is going on, but there seems to be a conflict of ideas.
One is what the IRepository should return.
I would like to deal ONLY in POCOs, so I would have an IRepository implementation for every aggregate root, like so:
public class OrangeRepository : IOrangeRepository
{
    public Orange GetOrange(IOrangeCriteria criteria);
}
where IOrangeCriteria takes a number of arguments specific to finding an Orange.
The other thing I have is a number of data back-ends; this is why I got into this pattern in the first place. I imagine I will have an implementation for each, e.g.
OrangeRepositoryOracle, OrangeRepositorySQL, OrangeRepositoryMock etc
I would like to keep my options open so that I could use EF or NHibernate; again, if my IOrangeRepository deals in POCOs, then I would encapsulate this within the repository itself by implementing an OrangeRepositoryNHibernate, etc.
Am I on the right lines?
Thanks
EDIT: Thanks for the feedback, I don't have anyone else to bounce these ideas off at the moment so it is appreciated!
Yes, your version is the safest and most compatible one. You can use it with just about any resource: not only data access, but web services, files, whatever.
Note that with the IQueryable version you still get to work with your POCO classes, but you are tied to IQueryable. Also consider that you could have code that uses the IQueryable, and then it turns out you hit a case that one of the repositories' ORMs doesn't handle well.
I use the same pattern as you do, and I like it a lot. You can get your data from any resource.
But the advantage of using IQueryable is that you do not have to code your own criteria API like the OrangeCriteria.
When NHibernate gets full LINQ support, I may switch to IQueryable.
Then you get:
public class OrangeRepository : IOrangeRepository
{
    public IQueryable<Orange> GetOranges();
}