I need to pass a custom object from one fragment to another. I use AndroidX Navigation to move between fragments, and my use case requires navigating via a deep link.
The destination fragment's entry in the navigation graph is:
<fragment
android:id="@+id/ListFragment"
android:name="com.joseph.learning.ListFragment"
android:label="ListFragment">
<argument android:name="todoItem"
app:argType="com.joseph.learning.models.todoItem" />
<deepLink app:uri="android-app://androidx.navigation/todoList/{todoItem}" />
</fragment>
From the source fragment, navigation is initiated like this:
findNavController().navigate(
Uri.parse("android-app://androidx.navigation/todoList/$item"),
NavOptions.Builder()
.setEnterAnim(R.anim.transition_slide_in_right)
.setExitAnim(R.anim.transition_slide_out_left)
.setPopExitAnim(R.anim.transition_slide_out_right)
.setPopEnterAnim(R.anim.transition_slide_in_left)
.build()
)
But it fails as soon as it executes, with the error below:
java.lang.UnsupportedOperationException: Parcelables don't support default values.
at androidx.navigation.NavType$ParcelableType.parseValue(NavType.java:679)
at androidx.navigation.NavType.parseAndPut(NavType.java:96)
at androidx.navigation.NavDeepLink.parseArgument(NavDeepLink.java:306)
...
If the custom object argument is replaced with a String or an Integer, the navigation works without any issues, and the passed data can be extracted in the destination fragment using navArgs().
What should be done to pass a custom object across fragments?
From the Android documentation:
Routes, deep links, and URIs with their arguments can be parsed from strings. This is not possible using custom data types such as Parcelables and Serializables as seen in the above table.
Solution: To pass around custom complex data, store the data elsewhere such as a ViewModel or database and only pass an identifier while navigating; then retrieve the data in the new location after navigation has concluded.
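A minimal sketch of that approach, assuming the item has a Long id and using a hypothetical activity-scoped TodoViewModel (all names here are illustrative, and the todoItem deep-link argument is assumed to be re-declared with app:argType="long"):

import android.net.Uri
import androidx.fragment.app.Fragment
import androidx.fragment.app.activityViewModels
import androidx.lifecycle.ViewModel
import androidx.navigation.fragment.findNavController

// Hypothetical activity-scoped ViewModel that holds the real objects.
class TodoViewModel : ViewModel() {
    private val items = mutableMapOf<Long, TodoItem>()
    fun put(item: TodoItem) { items[item.id] = item }
    fun get(id: Long): TodoItem? = items[id]
}

class SourceFragment : Fragment() {
    private val todoViewModel: TodoViewModel by activityViewModels()

    fun open(item: TodoItem) {
        // Keep the object in the shared ViewModel and put only its id into the URI.
        todoViewModel.put(item)
        findNavController().navigate(
            Uri.parse("android-app://androidx.navigation/todoList/${item.id}")
        )
    }
}

class ListFragment : Fragment() {
    private val todoViewModel: TodoViewModel by activityViewModels()

    override fun onStart() {
        super.onStart()
        // The deep-link placeholder is now a plain long, so it parses without a Parcelable NavType.
        val item = todoViewModel.get(requireArguments().getLong("todoItem"))
        // ... render item ...
    }
}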
I've applied the guidance on programmatic usage of M2Doc (also with this help) to successfully generate a document via the API, which was previously prepared by using the M2Doc GUI (configured .docx plus a .genconf file). It seems to also work with a configured .docx, but without a .genconf file.
Now I would like to go a step further and simplify the workflow in our application. The user should come with a .docx, include the {m:...} fields there, especially for variable definition, and then, in our Eclipse application, just assign model elements to the list of variables and finally press "generate". The rest I would like to handle via the M2Doc API:
Get list of variables from the .docx
Tell M2Doc the variable objects (and their types and other required information, if that is separately necessary)
Provide M2Doc with sufficient information to handle AQL expressions like projectmodel::PJDiagram.allInstances() in the Word fields
I tried to analyse the M2Doc source code for this, but still have some questions about how to achieve the goal:
The parse/generate API does not create any config information into the .docx or .genconf files, right? What would be the API to at least generate the .docx config information?
The source code mentions "if you are using a Generation" - what is meant with that? The use of a .genconf file (which seems to be optional for the generate API)?
Where can I get the list of variables from, which M2Doc found in a .docx (during parse?), so that I can present it to the user for Object (Model Element) assignment?
Do I have to tell M2Doc the types of the variables, and in which resource file they are located, besides handing over the variable objects? My guess is no, as using a blank .docx file without any M2Doc information stored also worked for the variables themselves (not for any additional AQL expressions using other types, or .oclAsType() type castings).
How can I provide M2Doc with the types information for the AQL expressions mentioned above, which I normally tell it via the nsURI configuration? I handed over the complete resourceSet of my application, but that doesn't seem to be enough.
Any help would be very much appreciated!
To give you an impression of my code so far, see below - note that it's actually Javascript instead of Java, as our application has a built-in JS-Java interface.
//=================== PARSING OF THE DOCUMENT ==============================
var templateURIString = "file:///.../templateReqs.docx";
var templateURI = URI.createURI(templateURIString);
// canNOT be empty, as we get nullpointer exceptions otherwise
var options = {"TemplateURI":templateURIString};
var exceptions = new java.util.ArrayList();
var resourceSetForModels = ...; //here our application's resource set for the whole model is used, instead of M2Doc "createResourceSetForModels" - works for the moment, but not sure if some services linking is not working
var queryEnvironment = m2doc.M2DocUtils.getQueryEnvironment(resourceSetForModels, templateURI, options);
var classProvider = m2doc.M2DocPlugin.getClassProvider();
// empty Monitor for the moment
var monitor = new BasicMonitor();
var template = m2doc.M2DocUtils.parse(resourceSetForModels.getURIConverter(), templateURI, queryEnvironment, classProvider, monitor);
// =================== GENERATION OF THE DOCUMENT ==============================
var outputURIString = "file:///.../templateReqs.autogenerated.docx";
var outputURI = URI.createURI(outputURIString);
var variables = new java.util.HashMap(); // map of variable name -> value, as expected by generate()
variables["myVar1"] = ...; // assignment of objects...
m2doc.M2DocUtils.generate(template, queryEnvironment, variables, resourceSetForModels, outputURI, monitor);
Thanks!
No, the APIs used to parse and generate don't modify the template file or the .genconf file. To modify the configuration of the template you will need to use the TemplateCustomProperties class. That will allow you to register your metamodels and service classes. This information is then used to configure the IQueryEnvironment, so you might also want to configure the IQueryEnvironment directly in your code.
"Generation" in this context refers to the .genconf file. Note that the .genconf file is also an EMF model, so you can craft one in memory to launch your generation if that is easier for you. But yes, the use of a .genconf file is optional, as in your code example.
To get the list of variables in the template you can use the class TemplateCustomProperties:
TemplateCustomProperties.getVariables() lists the variables that are declared in the template, along with their type
TemplateCustomProperties.getMissingVariables() lists the variables that are used in the template but not declared
You can also find the list of used metamodels (EPackage nsURIs) and imported service classes.
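For example, in the same JavaScript-over-Java style as the question's snippet (how the TemplateCustomProperties instance is obtained depends on your setup; here it is assumed to be constructed from the template's XWPFDocument, so check the constructors/factories available in your M2Doc version):

// Assumption: "templateDocument" is the template's XWPFDocument and the
// TemplateCustomProperties class has been made reachable from the JS bridge.
var properties = new TemplateCustomProperties(templateDocument);
var declared = properties.getVariables();        // map of variable name -> declared type
var missing  = properties.getMissingVariables(); // variables used in the template but not declared
// Present "declared" and "missing" to the user for model element assignment.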
The type of the variables is not needed at generation time; it's only needed if you want to validate your template. At generation time you need to pass a map from each variable name to its value, as you did in your example. The value of a variable can be any object from your model (an EObject), a String, an Integer, ... If you want to use something like oclIsKindOf(pkg::MyEClass) you will need to register the nsURI of pkg first; see the next point.
The code you provided should let you use something like projectmodel::PJDiagram.allInstances(). This service needs a ResourceSetRootEObjectProvider(), which is initialized in M2DocUtils.getQueryEnvironment(). But you need to declare the nsURI of your metamodel in your template (see TemplateCustomProperties); this will register it in the IQueryEnvironment. You can also register it yourself using IQueryEnvironment.registerEPackage().
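A short sketch of that programmatic registration, assuming the generated EMF package class of your metamodel (its name is made up here) is reachable from the JS bridge:

// Register the metamodel so AQL can resolve projectmodel::PJDiagram and friends.
// ProjectmodelPackage stands for the generated EPackage class of your metamodel.
queryEnvironment.registerEPackage(ProjectmodelPackage.eINSTANCE);
// Alternatively, declare the nsURI in the template's custom properties so that
// M2DocUtils.getQueryEnvironment() registers it for you.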
This should help you find the missing parts in the configuration of the AQL environment. Your code looks good and should work once you add the configuration part.
In Glimmer.js, what is the best way to reset a tracked property to an initial value without using the constructor?
Note: Cannot use the constructor because it is only called once on initial page render and never called again on subsequent page clicks.
There are two parts to this answer, but the common theme between them is that they emphasize switching from an imperative style (explicitly setting values in a lifecycle hook) to a declarative style (using true one-way data flow and/or using decorators to clearly indicate where you’re doing some kind of transformation of local state based on arguments).
Are you sure you need to do that? A lot of times people think they do and they should actually just restructure their data flow. For example, much of the time in Ember Classic, people reached for a pattern of "forking" data using hooks like didInsertElement or didReceiveAttrs. In Glimmer components (whether in Ember Octane or in standalone Glimmer.js), it's idiomatic instead to simply manage your updates in the owner of the data: really doing data-down-actions-up.
Occasionally, it does actually make sense to create local copies of tracked data in a component, for example when you want to have a clean separation between data coming from your API and the way you handle data in a form (because user interfaces are API boundaries!). In those scenarios, the @localCopy and @trackedReset decorators from tracked-toolbox are great solutions.
@localCopy does roughly what its name suggests: it creates a local copy of data passed in via arguments, which you can change locally via actions, but which also switches back to the argument if the argument value changes.
@trackedReset creates some local state which resets when an argument updates. Unlike @localCopy, the state is not a copy of the argument; it just needs to reset when the argument updates.
With either of these approaches, you end up with a much more “declarative” data flow than in the old Ember Classic approach: “forking” the data is done via decorators (approach 2), and much of the time you don’t end up forking it at all because you just push the changes back up to the owner of the original data (1).
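A rough example of both decorators (the component, argument, and property names are made up; check the tracked-toolbox README for the exact API of the version you install):

import Component from '@glimmer/component';
import { localCopy, trackedReset } from 'tracked-toolbox';

export default class DraftProfile extends Component {
  // Editable local copy of @name; local edits win until @name itself changes,
  // at which point the copy snaps back to the new argument value.
  @localCopy('args.name') draftName;

  // Plain local state that resets to its initializer (0) whenever @user changes.
  @trackedReset('args.user') clickCount = 0;

  updateDraft = (event) => (this.draftName = event.target.value);
  bumpCount = () => this.clickCount++;
}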
I have what I believe to be a common but complicated problem to model. I've got a product configurator that has a series of buttons. Every time the user clicks a button (corresponding to a change in the product configuration), the url changes, essentially creating a bookmarkable state for that configuration. The big caveat: I do not get to know what the configuration options or values are until after app initialization.
I'm modeling this using EmberCLI. After much research, I don't think it's a wise idea to try to fold these directly into the path component, and I'm looking into using the new Ember query string additions. That should work for allowing bookmarkability, but I still have the problem of not knowing what those query parameters are until after initialization.
What I need is a way to allow my Ember app to query the server initially for a list of parameters it should accept. On the link above, the documentation uses the parameter 'filteredArticles' for a computed property. Within the associated function, they've hard-coded the value that the computed property should filter by. Is it a good idea to try to extend this somehow to be generalizable, with arguments? Can I even add query parameters on the fly? I was hoping for an assessment of the validity of this approach before I get stuck down the rabbit hole with it.
I dealt with a similar issue when generating a preview popup of a user's changes. The previewed model had a dynamic set of properties that could not be predetermined. The solution I came up with was to base64 encode a set of data and use that as the query param.
Your url would then contain something like this: ?filter=ICLkvaDlpb0iLAogICJtc2dfa3
The query param is bound to a two-way computed property that takes in a base64 string and outputs a JSON object,
JSON.parse(atob(serializedPreview));
as well as doing the reverse: take in a json obj and output a base64 string.
serializedPreview = btoa(JSON.stringify(filterParams));
You'll need some logic to prevent empty json objects from being serialized. In that case, you should just set the query param as null, and remove it from your url.
Using this pattern, you can store just about anything you want in your query params and still have the url as shareable. However, the downside is that your url's query params are obfuscated from your users, but I imagine that most users don't really read/edit query params by hand.
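To make that concrete, here is a rough controller-level sketch (property names are illustrative, and it uses the classic get/set computed syntax, so adjust it to your Ember version):

import Ember from 'ember';

export default Ember.Controller.extend({
  queryParams: ['filter'],

  // The deserialized state the rest of the app works with.
  filterParams: null,

  // Two-way computed: the url sees a base64 string, the app sees a plain object.
  filter: Ember.computed('filterParams', {
    get() {
      const params = this.get('filterParams');
      if (!params || Object.keys(params).length === 0) {
        return null; // keep empty objects out of the url
      }
      return btoa(JSON.stringify(params));
    },
    set(key, serialized) {
      this.set('filterParams', serialized ? JSON.parse(atob(serialized)) : {});
      return serialized;
    }
  })
});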
I need to be able to assign a string returned by a helper to a component's attribute.
Here's what I got that isn't working:
{{nav-title text=(translate user.likes name=user.profile.name)}}
It tries to find translate on the controller (I guess) and throws the following error:
Handlebars error: Could not find property 'translate' on object (generated users.user.likes controller).
Don't think you can do that. I'd say either make a new property on whatever context that nav-title is being rendered in and do the logic there, or make a new component and move the logic inside.
It's ok to use components for domain-specific stuff. For example this could be a user-likes-nav component, which knows how to translate a user's likes before rendering the template.
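A rough sketch of that second option, assuming the logic behind your translate helper is also available as a plain function (the component shape and function signature here are guesses):

// A user-likes-nav component that does the translation itself.
App.UserLikesNavComponent = Ember.Component.extend({
  user: null,
  title: function() {
    // "translate" stands for whatever function backs your translate helper.
    return translate(this.get('user.likes'), { name: this.get('user.profile.name') });
  }.property('user.likes', 'user.profile.name')
});

Its template would render {{title}}, and you would use it as {{user-likes-nav user=user}}.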
I want to render an HTML entity as the value of an input field, e.g. m².
For simple demonstration purposes I have tried it out with a view representing the input field, but the standard input helper has the same behavior. The initial rendering works fine, but when the bound value is updated, the entity is suddenly escaped: instead of m³ the input shows the literal text m&sup3;.
Here you can see the code in action:
http://emberjs.jsbin.com/vifup/3/edit
I find it strange that the set call in init works fine, but the update on click does not work.
Is this a bug and are there any workarounds?
I found the answer myself. I think the issue arises when setting the value via jQuery: jQuery seems to escape the HTML entities before setting them with $.val(), and Ember seems to use $.val() in the background.
The simple yet not very nice solution I found is explained here:
http://www.objectpartners.com/2012/07/10/dynamic-html-entities-in-form-inputs/
In short, assign the new value containing HTML entities to a temporary DOM node via $.html(), read it back with $.html() (the entities come back decoded), and then set it on the input using $.val().
var new_value = $('<div></div>').html('m&sup2;').html(); // the entity comes back decoded as "m²"
$('input').val(new_value);