What is the cause of Storage data disappearance? - ionic2

In an application where Storage had been working normally, all of the data disappeared after an update. There was no change in the @ionic/storage version or the Ionic version.
Also, data created after the old data disappeared is saved normally.
What are the conditions under which Storage data is initialized?
"#ionic/storage": "2.0.0"
--- Ionic Info ---
Ionic:
ionic (Ionic CLI) : 4.0.1 (C:\Users\xxxxx\AppData\Roaming\npm\node_modules\ionic)
Ionic Framework : ionic-angular 3.9.2
@ionic/app-scripts : 3.1.11
Cordova:
cordova (Cordova CLI) : not installed
Cordova Platforms : android 6.3.0, ios 4.5.4
System:
Android SDK Tools : 26.1.1
NodeJS : v8.11.3 (C:\Program Files\nodejs\node.exe)
npm : 6.2.0
OS : Windows 7
Environment:
ANDROID_HOME : C:\Users\xxxxx\AppData\Local\Android\Sdk

Ionic Storage is a "wrapper" around the localForage library, which itself wraps different persistence solutions under the hood (via "drivers"). The only "guaranteed" persistence is SQLite, when Ionic Storage runs on a device as a hybrid (Cordova) app. The other, browser-based drivers (IndexedDB or WebSQL) persist data only to the extent the particular browser allows, and such browser-based persistence is not truly "guaranteed" since it is subject to:
the browser's storage quota (how much disk space the browser allows a site to store)
the browser mode (e.g. private browsing, which can block localStorage etc.)
the browser needing space for another site (it can start evicting your app's data, for instance)
So overall, unless you use SQLite, treat Ionic Storage as a persistent cache of sorts.
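For example, to see which backend your app actually ended up using, you can log the active driver once Storage is ready. This is a minimal sketch against the @ionic/storage 2.x API; the service name and injection site are just for illustration:

import { Injectable } from '@angular/core';
import { Storage } from '@ionic/storage';

@Injectable()
export class StorageDiagnosticsService {
  constructor(private storage: Storage) {
    // Wait for the underlying localForage instance to settle on a driver, then log its name.
    // On a device with the SQLite plugin you would expect a SQLite driver;
    // in a plain browser it will be an IndexedDB or WebSQL driver.
    this.storage.ready().then(() => {
      console.log('Active storage driver:', this.storage.driver);
    });
  }
}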
Also keep in mind that Ionic Storage can use one type of storage available to it at one point and then switch to another if conditions change, which can leave your data still present in WebSQL but inaccessible because your app has switched to IndexedDB. To avoid that, it is best to strictly control the available drivers and their order of preference / initialization, as in the sketch below.
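A minimal sketch of pinning the driver order in an ionic-angular app module (the database name '__mydb' is just an illustrative value; the driver names are the identifiers @ionic/storage maps to cordova-sqlite-storage and the localForage browser drivers):

import { NgModule } from '@angular/core';
import { IonicStorageModule } from '@ionic/storage';

@NgModule({
  imports: [
    // Prefer SQLite on devices; only fall back to browser storage when it is unavailable.
    IonicStorageModule.forRoot({
      name: '__mydb',
      driverOrder: ['sqlite', 'indexeddb', 'websql']
    })
  ]
})
export class AppModule {}

With an explicit driverOrder, the app is much less likely to switch backends between versions, which is one common cause of "disappearing" Storage data.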

Related

Same App Appears Twice in Apple Health Sources since migrating to SwiftUI App Lifecycle

I recently updated an Apple Watch App from the app + extension lifecycle to the SwiftUI lifecycle.
Or to put it another way, the bundle IDs have changed so that:
Before
com.myapp
com.myapp.watchkitapp
com.myapp.watchkitapp.extension
After
com.myapp.paddlelogger
com.myapp.watchkitapp
For me everything works great, but we have multiple reports of people seeing two versions of the app in the Apple Health Sources.
This means there are two "sources" of data and two sets of permissions. In the past we just had one set of permissions.
It also means we have trouble reading data on the iPhone app that was recorded on the watch app.
HKSource.default().bundleIdentifier is
com.myapp on iPhone and
com.myapp.watchkitapp on Apple Watch
That must be part of the issue(?).
Is this something I've done wrong? I can't find any docs on migrating from the legacy extension style to the new SwiftUI lifecycle.
Workaround: When reading Apple Health data in the app I now have to check for both bundle ids to distinguish data from my app vs that from 3rd party apps.
From speaking to other developers, this is not the only app that appears twice in Health, so I am assuming this is an Apple issue.

Can I launch protractor mobile test on the AWS Device Farm

Can I run my e2e test developed using Protractor on the AWS device farm?
I want to complete mobile testing of my project using the AWS Device Farm, and I do not really understand whether I can do that or not. I found three topics about this on the AWS forum, but they are old (from 2018).
First forum discussion
Second forum discussion
Third forum discussion
Maybe something changed?
I have protractor e2e tests written for the desktop browser and want to use those ones for the mobile browser too.
I will answer this for both mobile browsers and desktop testing.
Mobile Browsers
AWS Device Farm has 2 execution modes: Standard Mode and Custom Mode.
Standard mode gives you granular reporting, which is useful if you don't generate a report for your tests locally. It splits up the artifacts for each test.
Custom mode gives you execution behavior and results as close as possible to what you would get locally. It does not give you the granular reporting, which is fine for most teams since you already generate reports locally and those are available on Device Farm as well. Customers are recommended to use custom mode, as it is the mode that is kept most up to date and adds support for the latest frameworks, unless of course they absolutely need granular reporting.
Protractor on Device Farm
It is not officially supported today.
However, Device Farm supports Appium Node.js in custom mode. You get a YAML file in which you can run shell commands on the host machine where the tests will be executed. So in the case of Protractor you could select this test type (Appium Node.js), install the missing dependencies needed for the tests, start your server, and run your tests.
Points to evaluate: since Device Farm takes your tests as inputs, you will have to upload a zip file of your tests. I would highly recommend checking the instructions for Node.js tests and following the same packaging approach. Alternatively, you can download your tests on the fly from within the YAML file.
Desktop Browsers
Device Farm has a Selenium grid that you can connect to from your local machine to run your tests. Chrome and Firefox run on the Windows platform; Safari is not supported today. If you already run your tests against a Selenium grid on your local machine, then you most likely will be able to run the same tests using the Selenium grid on Device Farm, pending validation of course.
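As a rough sketch of the desktop grid setup (the generated URL and spec path below are placeholders, not real values): you create a short-lived remote hub URL for your Device Farm test grid project, for example with the AWS CLI, and point Protractor at it like any other Selenium grid.

// protractor.conf-devicefarm.js (sketch)
// Generate a hub URL first, e.g.:
//   aws devicefarm create-test-grid-url --project-arn <your-testgrid-project-arn> --expires-in-seconds 600
// and use the returned URL exactly as given.
exports.config = {
  seleniumAddress: '<url-returned-by-create-test-grid-url>',   // placeholder
  capabilities: {
    browserName: 'chrome'   // Chrome and Firefox on Windows are the supported desktop browsers
  },
  specs: ['e2e/**/*.e2e-spec.ts']   // illustrative spec path
};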
If you need more help on any of these items feel free to reach out to aws-devicefarm-support@amazon.com and I can help you further.
You can also test in Chrome with an emulated mobile mode: add "mobileEmulation" to the chromeOptions in a new protractor.conf-mobile.js:
chromeOptions: {
  args: ['--disable-infobars', '--headless', '--disable-gpu', '--window-size=1920,1080'],
  mobileEmulation: { deviceName: 'Galaxy S5' }
}
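For context, a minimal complete protractor.conf-mobile.js might look like the following sketch (the spec path is illustrative):

// protractor.conf-mobile.js
exports.config = {
  directConnect: true,   // talk to chromedriver directly, no Selenium server needed
  specs: ['e2e/**/*.e2e-spec.ts'],   // illustrative
  capabilities: {
    browserName: 'chrome',
    chromeOptions: {
      args: ['--disable-infobars', '--headless', '--disable-gpu', '--window-size=1920,1080'],
      // Emulate a mobile device known to Chrome DevTools
      mobileEmulation: { deviceName: 'Galaxy S5' }
    }
  }
};

You would then run it with protractor protractor.conf-mobile.js instead of your desktop configuration.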

Sitecore 8.1 xDB data capture requirements

We have just upgraded our 7.2 platform to 8.1. We have enabled xDB as well.
I've following questions:
Do we need to write any custom code (JS or C#code) to capture analytics data on to xDB?
What sort of data is captured by default and what sort of data requires custom code?
Thanks.
1) No custom code is required by default. You just need to make sure the configuration files are set up properly. Sitecore Analytics and xDB features are enabled when you install Sitecore. In Sitecore 8.0 you only need "Analytics.Enabled" set to "true" in Sitecore.Analytics.config, but in Sitecore 8.1, because the notion of separating xDB from core Sitecore functionality was introduced, you also need the extra license for xDB and "Xdb.Enabled" set to "true" in Sitecore.Xdb.config. Also make sure you have MongoDB installed and running on your machine, since xDB actually consists of both MongoDB and SQL Server.
Also have a look on following links about CMS-only mode in Sitecore 8.1:
CMS-only mode: an overview
Sitecore 8.1: what does new CMS-only mode mean
2) Sitecore xDB collects visitor information in the "Contacts" collection in MongoDB and the actual visits in the "Interactions" collection in MongoDB (in JSON format), then processes the raw data to generate statistics and stores them in SQL Server (a separate database for analytics). In general, Sitecore shows you various statistics based on "PageViews" and "Engagement Values" side by side on dozens of charts. Check out the "ReportDataView" and "TrafficOverview" views in SQL Server (once you have xDB up and running) to get an idea of what it is doing.
Anyway, in many cases you may find the ready-to-use charts and graphs are not enough, so you can also access the raw data in MongoDB or its aggregated counterpart in SQL Server directly, and you can record extra pieces of information on each page so that you can extract them later in Experience Analytics.

How to use Java Mission Control (or other solutions) with ColdFusion (tomcat)

The ColdFusion monitor is great for details about the server itself but it is pretty limited when it comes to the JVM.
How can one implement Java Mission Control or similar JVM monitoring solution to monitor the JVM running ColdFusion while you are developing and testing performance / memory footprint of applications and features?
Note that I am asking this question for "community knowledge" and already know the answer, but feel free to contribute any tidbits about other monitoring solutions.
Mission Control used to be bundled as its own utility application in the JRockit JDK. HotSpot and JRockit were two entirely different JVMs with their own JDKs/JREs. By default, ColdFusion uses the HotSpot JVM. JRockit is basically defunct for new development, with some of its features having been merged into HotSpot.
Java Mission Control is free for development purposes.
To get started, download the latest 1.8 JDK. My preference is to uninstall all other 64-bit JDKs and JREs at this point.
This step might not be needed: change your environment variables to update your Java home.
a. Right click "My Computer" -> "Properties" -> "Advanced" -> "Environment Variables"
b. Change JAVA_HOME and any other JAVA vars to your new path
Adjust your jvm.config
a. Make a .bak copy of C:\ColdFusion1x\cfusion\bin\jvm.config
b. Add the following arguments to the jvm.config (append them to the existing java.args line)
-XX:+UnlockCommercialFeatures
-XX:+FlightRecorder
-Dcom.sun.management.jmxremote.autodiscovery=true
-Dcom.sun.management.jdp.name=ColdFusion10
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.port=7091
-Dcom.sun.management.jmxremote.ssl=false
Open C:\program files\java\jdk1.8_**\lib\missioncontrol\configuration\org.eclipse.equinox.simpleconfigurator\bundles.info file with administrator privileges and remove the four lines that start with the following text:
org.eclipse.equinox.log.nl_ja
org.eclipse.equinox.log.nl_zh
org.eclipse.equinox.supplement.nl_ja
org.eclipse.equinox.supplement.nl_zh
Edit the C:\program files\java\jdk1.8_xx\lib\missioncontrol\configuration\config.ini, and add the following line: eclipse.home.location=$osgi.install.area$
Start up : C:\program files\java\jdk_1.8.0_**\bin\jmc
Note that JMC is launching from 1.8 while your ColdFusion instance is running with whatever the latest Hotspot version you have installed with your ColdFusion updater.
You can install plugins from the help -> install new software. The plugins site should already be there. This will give you full on memory analysis of a heap dump. It's not nearly as good as the JRockit memory analyzer, but it's better than nothing.
If you are running ColdFusion as a Windows service, you will need to open services.msc and shut down your ColdFusion Application Server. Then run C:\ColdFusion10\cfusion\bin\cfstart.bat to fire up Tomcat and ColdFusion as a foreground application. The jOverflow plugin will not work when running as a Windows service.
You will see your JVM appear in Java Mission Control; mine is called -Xdebug since I guess it has no name and shows up under its first JVM option.
Right click on your ColdFusion JVM and select "Start JMX console". The JMX console will show up on the right.
There is a whole lot to explore, including a lot of junk when it comes to examining memory due to having to sift through the ColdFusion Framework itself, but there are a ton of tutorials for deciphering what it means.
This video is your primary introduction: https://www.youtube.com/watch?v=WMEpRUgp9Y4
References:
https://www.youtube.com/watch?v=WMEpRUgp9Y4
http://www.ghidinelli.com/2009/07/16/finding-memory-leaks-coldfusion-jvm
http://www.oracle.com/technetwork/java/javase/jmc53-release-notes-2157171.html (see "known issues" section)

Sitecore development and demo servers

I'm attempting to get an understanding of the best practice / recommended setup for moving information between multiple Sitecore installations. I have a copy of Sitecore set up on my machine for development. We need a copy of the system set up for demonstration to the client and for people to enter content prelaunch. How should I set things up so that people can enter content / modify the demonstration version of the site while still allowing me to continue development on my local machine and publish my updates without the two systems overwriting each other's changes? Or is this not the correct approach for me to be taking?
I believe that the 'publishing target' feature is what I need to use, but as this is my first project working with Sitecore, I am looking for practical experience on how to manage this workflow.
Nathan,
You didn't specify what version of Sitecore, but I will assume 6.01+
Leveraging publishing targets will allow you to 'publish' your development Sitecore tree (or sub-trees) from your development environment to the destination, such as your QA server. However, there is potential that you publish /sitecore/content/home/* and then you wipe out your production content!
Mark mentioned using "Sitecore Packages" to move your content (as well as templates, layout items, etc...) over, which is the traditional way of moving items between environments. Also, you didn't specify what version of Sitecore you are using, but the Staging Module is not needed for Sitecore 6.3+. The Staging Module was generally used to keep file systems in sync and to clear the cache of Content Delivery servers.
However, the one piece of the puzzle that is missing here is that you will still need to update your code and assets (.jpg, .css, .js, .dll, etc.) on the QA box.
The optimal solution would be to have your Sitecore items (templates, layout items, rendering items, and developer-owned content items) in source control right alongside your ASP.NET Web Application and any class library projects you may have. At a basic level, you can do this using the built-in "Serialization" features of Sitecore. Lars Nielsen wrote an article touching on this.
To take this to the next level, you would use a tool such as Team Development for Sitecore. This tool will allow you to easily bring your Sitecore items into Visual Studio and treat them as code. At this point you could setup automated builds, or continuous integration, so that your code and Sitecore items, are automatically pushed to your QA environment. There are also configuration options to handle the scenario of keeping production content in place while still deploying developer owned items.
I recommend you look at the Staging Module if you need to publish to multiple targets from the same instance, i.e. publish content from one tree over a firewall to a development site, to a QA site, etc.
If you're just migrating content from one instance to another piecemeal, you can use Sitecore packages, which are the standard tool for moving content. Packages serialize the content to XML, zip it up, and let you install it in other instances.