Web worker throws error after creation in SharePoint 2013

I can't understand whether there's something in SharePoint that's causing my web worker to throw an error as soon as it's created in IE11. The same worker runs great in Chrome. I even tried a simple test.
The worker file:
self.addEventListener('message', function (e) { console.log("message"); });
And I'm creating the worker like so:
var worker = new Worker('http://{rootSite}/sites/53/Style%20Library/testworker.js')
worker.addEventListener('message',function(e){ console.log("message"); });
worker.addEventListener('error',function(e){ console.log("error"); });
It's strange because I tested the same script on a non-SharePoint site and it worked in IE,
but on a SharePoint site, as soon as I create the test worker from the same site's document library, it throws an error with a null message!
Can anyone tell me what is going on here?

Old question, but I'd like to leave an answer for anyone who stumbles upon it.
Loading a web worker via a blob in IE has many limitations; I wouldn't recommend going down that route.
For some reason, Internet Explorer blocks web workers from loading when the file is served from a SharePoint site.
Storing your web worker files in the _layouts folder works well, as in the sketch below.
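A minimal sketch of that approach, assuming the worker script has been deployed under _layouts (the exact path and file name below are assumptions, as is the availability of the _spPageContextInfo global on the page):
// Build the worker URL from the current web's address (assumed deployment path).
var workerUrl = _spPageContextInfo.webAbsoluteUrl + '/_layouts/15/testworker.js';
var worker = new Worker(workerUrl);
worker.addEventListener('message', function (e) { console.log(e.data); });
worker.addEventListener('error', function (e) { console.log(e.message); });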

Okay... so, for all the SharePoint developers who ever want to use web workers in their apps:
I still don't know why the web worker failed to load the external script in Internet Explorer,
but apparently an inline web worker works!
So you can store your worker code in the document library as a text file, fetch its content via AJAX, and then create an inline worker. You will need the window.URL object and the Blob constructor.
First build a Blob from the JavaScript code as a string, then create the worker from an object URL:
var string = "worker code";
var blob = new Blob([string], { type: "text/javascript" });
var worker = new Worker(URL.createObjectURL(blob));
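Putting it all together, here is a minimal sketch of the approach described above (the text-file URL reuses the example path from the question, and XMLHttpRequest is used because IE11 has no fetch):
var xhr = new XMLHttpRequest();
// Worker source stored as a text file in the document library (example path).
xhr.open('GET', 'http://{rootSite}/sites/53/Style%20Library/testworker.txt');
xhr.onload = function () {
    // Wrap the fetched source in a blob and start the worker from an object URL.
    var blob = new Blob([xhr.responseText], { type: 'text/javascript' });
    var worker = new Worker(URL.createObjectURL(blob));
    worker.addEventListener('message', function (e) { console.log(e.data); });
    worker.postMessage('start');
};
xhr.send();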

Related

PWA: how to refresh content every time the app is opened

I created a PWA app which sends API calls to my domotic server and prints the responses on the home page (e.g. outside temperature and vacuum robot status).
While all the data gets refreshed on the very first app opening, if I minimize the app without completely shutting it off, I get no data refreshing at all.
I was wondering how to force a refresh every time the app gets re-opened, without having to do it manually (no pull-down to refresh, no refresh button).
I found the solution myself by adding the following code in the service worker:
self.addEventListener('visibilitychange', function () {
    if (document.visibilityState === 'visible') {
        console.log('APP resumed');
        window.location.reload();
    }
});
Here is the solution that works.
You can place this code wherever you have access to the window object:
window.addEventListener("visibilitychange", function () {
    console.log("Visibility changed");
    if (document.visibilityState === "visible") {
        console.log("APP resumed");
        window.location.reload();
    }
});
Keep in mind that a forced reload every time the user switches between apps may hurt the user experience or cause data loss.
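If a full reload is too disruptive, a gentler variant is to refetch only the data when the app becomes visible again. This is just a sketch; refreshData() is a hypothetical helper standing in for whatever API calls populate the page (the listener is attached to document, where the event is documented to fire):
document.addEventListener('visibilitychange', function () {
    if (document.visibilityState === 'visible') {
        refreshData(); // hypothetical: re-issue the API calls and update the page in place
    }
});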

CloudKit Dashboard: Deploy Schema to Production fails with "There was a problem loading the environment’s status"

Note: This is not new, but I have some new insights on it.
For about three weeks now I have regularly tried to deploy the development schema of my CloudKit container to production, using the CloudKit Dashboard.
It spins for exactly a minute and then tells me "There was a problem loading the environment's status".
This is not new; many other questions describe this as well:
Error CloudKit Dashboard - There was a problem loading the environment's status
Does iCloud need to be in the Production environment in order to use in Production?
iCloud dashboard: Cannot deploy CloudKit schema to Production
Apple support told me to:
look at https://developer.apple.com/forums/thread/656723 (try again after a day with a stable network)
use Safari and reset the browser settings to clear cache and cookies
"You may also try creating a new CloudKit container, rebuilding your schema, and then try again." => obviously doesn't work, because users have data on production
TL;DR:
Kill the timeout by running this in the console:
var id = window.setTimeout(function() {}, 0);
while (id--) {
    window.clearTimeout(id); // will do nothing if no timeout with id is present
}
(the response is undefined — that's okay)
How I got there
So I started to look at the requests the site makes to the backend when I click "deploy". Chrome shows that the request to
https://p39-ckdatabasews.icloud.apple.com/r/v3/user/<container-name>/production/public/admin/deployment/status?team_id=<team-id>
is cancelled after 1.0 min.
Insight 1
The problem is with the production schema. I had used Reset Development Environment before to make sure I hadn't messed the development schema up myself, but knowing this would have spared me that step.
I used the Copy as cURL command (in Chrome, because it also copies the auth cookies, which Safari does not) and ran it in Terminal.
Interestingly, that does respond after 1 min 37 s. That's also what the X-Apple-Edge-Response-Time: 97244 header says (the value is in milliseconds).
If you know what to look for, the console will also tell you that the request timed out.
Insight 2
The server takes too long to respond (over 1.5 minutes in my case) and the client script times out (at 1 minute).
Note: You can also get a response by right-clicking the request in Chrome and choosing "Replay XHR".
Solution
I tried to understand the JavaScript that sends the XHR request and modify its timeout, but I failed. However, you can apparently clear all existing timeouts with:
var id = window.setTimeout(function() {}, 0);
while (id--) {
    window.clearTimeout(id); // will do nothing if no timeout with id is present
}
(from https://stackoverflow.com/a/8860203)
Running that while waiting for the response actually worked for me!

Multiple embedded process engine instances using a shared database across multiple applications in Camunda?

Hello everyone. I am trying my hand at Camunda, and so far the tool seems awesome, but one thing I can't figure out is what happens when I bootstrap a process engine (with the same name) across multiple different applications. Imagine the code below is written in many applications, but the Camunda database URL in processes.xml is the same, which basically means the same processes.xml is read for each process application.
// instantiate the process application
MyProcessApplication processApplication = new MyProcessApplication();
// deploy the process application
processApplication.deploy();
// interact with the process engine
ProcessEngine processEngine = BpmPlatform.getDefaultProcessEngine();
processEngine.getRuntimeService().startProcessInstanceByKey(...);
// undeploy the process application
processApplication.undeploy();
Where the class MyProcessApplication could look like this:
@ProcessApplication(
    name = "my-app",
    deploymentDescriptors = {"path/to/my/processes.xml"}
)
public class MyProcessApplication extends EmbeddedProcessApplication {
}
Now, if I start a process instance via the runtime service in one application, will I be able to reference it in another application if I query for it via the repository service? The repository service is accessed via the ProcessEngine, and surely the process engine object in another application is different from the one that started the process instance, right? But the database is shared, so will the process instance be available? I can't get my head around this; maybe I am missing some fundamental knowledge, so please do enlighten me.

Zend Framework ACL fails the first time after switching servers

Hi guys!
I'm not a native English speaker, so I'd appreciate it if you correct my sentences!
To explain my issue, here is our development environment:
Language: PHP 7.3.11
Framework: Zend Framework 3.3.11
Servers: AWS EC2 × 4
Server OS: Amazon Linux 2
Redis is enabled. There are two projects: a-project on two EC2 instances (a-ec2) and b-project on two EC2 instances (b-ec2).
The only difference between a-ec2 and b-ec2 is the source code; the other settings (nginx, php-fpm, Redis, and the DB settings) are the same.
If I'm missing some info, please let me know.
The problem happened when we joined these projects.
After logging in to our service, Zend works oddly.
The loginAction is on a-ec2, and we can successfully log in with it.
We save the session information in Redis, and that works normally.
But the first time we switch the server from a-ec2 to b-ec2, a Zend ACL error occurs.
We use the isAllowed function to check whether the user has enough privilege to access a certain service.
The isAllowed function, located at line 827 of /library/ZendAcl.php, returns false the first time.
Then, when we reload the page, isAllowed returns true, so we can access the service.
In detail, something goes wrong around the _getRules function at line 1161, which uses the _getRuleType function.
Somewhere in that process, one of the arrays ends up containing "TYPE_DENY".
But when we reload (Ctrl + F5), that value turns into "TYPE_ALLOW".
How can this happen?
And how can we fix it?
We have been trying to figure this out for two weeks or more...
Thanks in advance!!
[update]
We found that the method below doesn't work well, so we can't get $auth properly.
$auth returns "".
self::$_auth = $auth = Zend_Auth::getInstance()->getStorage()->read();
[update 2]
We may have solved this issue, so I'll leave our solution here.
We used
Zend_Auth::getInstance()->getStorage()->read()
to get $auth from the session.
But for the first request after the switch, the session information that was saved on a-ec2 can't be read on b-ec2.
So we decided to use the session information in Redis, and changed the method of getting $auth like this:
require_once 'Zend/Session.php';
Zend_Session::start();
self::$_auth = $auth = Common_Model_Redis::get();
Start the session before connecting to the Redis server, and just get the session information from it!
I hope this works for you too!

One or more services have started or stopped unexpectedly SPTimerService (SPTimerV4)

I have stopped and restarted the services (SharePoint Administration & SharePoint Timer Service).
I cleaned the configuration cache using the steps below.
Summary of the steps to clear the timer job cache:
Stop the SharePoint Timer service on all servers in the farm.
Browse to C:\ProgramData\Microsoft\SharePoint\Config\{GUID}, where the {GUID} folder contains a bunch of XML files and NOT the files with a ".PERSISTEDFILE" extension.
Delete all the XML files.
Update the contents of the Cache.ini file to just say "1" (without quotes).
Restart the SharePoint Timer service on each server.
Reanalyze the issue in Health Analyzer.
Does anyone know why this keeps occurring and how I can stop it?
First of all, check your ULS logs and see if any errors arise.
Secondly, check the Event Viewer on your SharePoint server to see whether any errors are shown, and make sure you have enough disk space available.
You might also want to check this: Clearing Timer Services.
If you see any error, post it here.
Hope it helps.
Yotam