Speed up Ionic 2 live reload process

Is there any way to speed up the live reload process in Ionic 2 after a save? Every save takes 10 to 15 seconds to show the changes, which is close to unbearable. Surely it should not take that long?

Related

FFMPEG and AWS: What's the most efficient way to handle this?

I'm new to AWS and I originally built the FFmpeg functions on my Node.js API. But I realized this is the wrong way to do it in a real-world app: the video editing should be handled by separate Lambda functions in AWS, not by the main server.
I'm mainly a front-end developer but I'm open to learning new things.
I basically have the following process in my app:
User uploads video.
I need to take that video and add a watermark to it.
I then need a copy of the watermarked video in a smaller resolution.
I then need a 6-second GIF of the smaller-resolution video.
Finally, I need to upload the three edited files (two .mp4s and one .gif) to S3 and remove the original, non-watermarked video.
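For concreteness, here is a minimal sketch of those steps as three FFmpeg invocations driven from Python. The file names, watermark position, 480p target, and GIF start time are all illustrative assumptions, not details from the question:

```python
import subprocess

def run(cmd):
    """Run an ffmpeg command, raising if it fails (ffmpeg must be on PATH)."""
    subprocess.run(cmd, check=True)

src = "original.mp4"  # hypothetical input file

# 1. Burn a watermark into the full-resolution video.
run(["ffmpeg", "-y", "-i", src, "-i", "watermark.png",
     "-filter_complex", "overlay=10:10",        # top-left corner; position is arbitrary
     "-c:a", "copy", "watermarked.mp4"])

# 2. Downscale the watermarked copy (480p here, purely as an example).
run(["ffmpeg", "-y", "-i", "watermarked.mp4",
     "-vf", "scale=-2:480", "-c:a", "copy", "watermarked_480p.mp4"])

# 3. Cut a 6-second GIF from the small version (starting at 0:00 here).
run(["ffmpeg", "-y", "-ss", "0", "-t", "6", "-i", "watermarked_480p.mp4",
     "-vf", "fps=10,scale=320:-1", "output.gif"])

# 4. Upload watermarked.mp4, watermarked_480p.mp4 and output.gif to S3
#    (e.g. with boto3), then delete original.mp4.
```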
Here are my questions to be clear:
Should I upload the original file to S3 or to the server? And why?
Is the process above doable in a single Lambda function? Or do I need more Lambda functions?
How would you handle this problem, personally?
I originally built it by chaining one function to the next with promises, but AWS seems like a different world of doing things, and the approach I used would not work there.
Thanks a lot.
Update
Here are some tests I did with a couple videos:
|                                                 | Test 1                  | Test 2                  | Test 3                 | Test 4      | Test 5                    |
|-------------------------------------------------|-------------------------|-------------------------|------------------------|-------------|---------------------------|
| Original video resolution                       | 1080p                   | 1080p                   | 1080p                  | 1080p       | 480p                      |
| Original video duration                         | 23 minutes              | 15 minutes              | 11 minutes             | 3.5 minutes | 5 minutes                 |
| Step 1 duration (watermarking original video)   | 30 minutes              | 18 minutes              | 14 minutes             | 4 minutes   | 2 minutes                 |
| Step 2 duration (watermarking lower resolution) | 5 minutes               | 3 minutes               | 3 minutes              | 1 minute    | skipped (already low res) |
| Step 3 duration (6-second GIF creation)         | negligible (15 seconds) | negligible (10 seconds) | negligible (7 seconds) | negligible  | negligible                |
| Total                                           | ~35 minutes             | ~21 minutes             | ~17 minutes            | ~5 minutes  | ~2 minutes                |

Performance testing using the Ultimate Thread Group

I want to use the Ultimate Thread Group for my test, with 2100 concurrent users and a Synchronizing Timer whose number of simulated users to group by is set to 100.
I want to configure the thread group to run for 10 minutes in total.
I am not sure how to distribute that across initial delay, startup time, hold load, and shutdown time.
We cannot suggest anything meaningful because we don't know what your desired load pattern is.
Normally people configure thread arrival and departure so the test has:
A ramp-up phase - the load increases gradually, which lets you correlate the growing load with changing metrics like response time, transactions per second, and errors per second.
A "plateau" phase - check how the system behaves under constant, sustained load.
A ramp-down phase - lets you check whether the system gets back to normal when the load decreases.
If you don't have a better idea, go for 33% each for ramp-up, plateau, and ramp-down; in your case the easiest split is 3 minutes for ramp-up, 4 minutes to hold the load, and 3 minutes for ramp-down.
The relevant Ultimate Thread Group configuration follows directly from those numbers.
With regards to the Synchronizing Timer: it acts as a rendezvous point for all samplers in its scope. Given a ramp-up of 180 seconds for 2100 users, roughly 11.7 users arrive every second, so the first batch of requests fires at about the 8th second of the test with 100 users; from then on, requests are released in "spikes" of 100 each time another 100 threads have arrived.
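As a back-of-the-envelope check of those numbers (plain arithmetic only; the 3/4/3-minute split comes from the suggestion above):

```python
# Sanity-check the ramp-up and Synchronizing Timer spike timing.
users = 2100
ramp_up_s = 180   # 3-minute ramp-up
batch = 100       # Synchronizing Timer: number of simulated users to group by

arrival_rate = users / ramp_up_s       # ~11.67 users per second
first_spike_s = batch / arrival_rate   # ~8.6 s until the first 100 users exist
print(f"{arrival_rate:.1f} users/s, first spike of {batch} after ~{first_spike_s:.1f} s")

# Subsequent spikes fire each time another 100 threads have arrived:
print("first five spikes at (s):",
      [round(n * first_spike_s, 1) for n in range(1, 6)])  # [8.6, 17.1, 25.7, 34.3, 42.9]
```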

Concurrency context switching of 2+ youtube videos on a dual core machine

From what I know, concurrency involves context switching if the number of software threads exceeds the number of physical cores. So, for example, if there are 4 software threads running on 1 physical core, each software thread takes turns running, and no more than 1 software thread can be making progress at any instant.
I'm trying to apply this idea to YouTube videos on my MacBook Pro. I have 2 cores, and I started 4 YouTube videos (which I assume is basically 4 software threads) within a second of each other and played each to the 1-minute mark. Since I have 2 cores, I was under the impression that a maximum of 2 YouTube videos could be making progress simultaneously. My eyes and ears perceived that all 4 videos were making progress in parallel, and it didn't sound or look like any video was being paused, but I assumed this was because the context switching occurs at a frequency my senses cannot detect.
So I then set a timer to see how long it takes the 4 videos to get to the 1-minute mark when I start all 4 at approximately the same time (within 1 second of each other). The 4 videos took approximately 1 minute in total to get to the 1-minute mark. I'm confused why it doesn't take at least 2 minutes when I only have 2 cores, so that a maximum of 2 videos can be making progress simultaneously (and that is the best-case scenario, assuming my computer wasn't doing anything else but playing the YouTube videos).
It appears that I'm misunderstanding something about concurrency and context switching, because I don't see how all 4 videos can get to the 1-minute mark in a minute. Could someone explain?
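One detail worth making concrete: a video player needs only a small fraction of a core, so four players rarely compete for CPU even on two cores. Here is a minimal, purely illustrative Python sketch of the same experiment, where each "player" does a tiny bit of work per frame and otherwise waits for the next frame deadline (real players decode on dedicated threads and often on the GPU):

```python
import threading
import time

FPS = 30
DURATION_S = 3  # a 3-second "video" keeps the demo quick

def play(name):
    start = time.perf_counter()
    for frame in range(FPS * DURATION_S):
        sum(range(1000))  # simulate decoding one frame: a tiny bit of CPU work
        # ...then sleep until that frame's deadline, like a real player does
        deadline = start + (frame + 1) / FPS
        time.sleep(max(0.0, deadline - time.perf_counter()))
    print(f"{name} finished after {time.perf_counter() - start:.2f} s")

t0 = time.perf_counter()
threads = [threading.Thread(target=play, args=(f"video {i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All four finish in ~3 s of wall-clock time, not 2 * 3 s or 4 * 3 s,
# because each thread spends almost all of its time waiting, not computing.
print(f"all 4 done in {time.perf_counter() - t0:.2f} s total")
```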

Profiling Python on large dataset

I have a dataset with 3 million lines to process, and the processing functions are cythonized. When I run the entire processing on a small subsample of 10,000 lines, it takes about 1.5 minutes; a subsample of 30,000 lines takes about 3 minutes. However, when I process the whole dataset, only a quarter of it is done after 10 hours, although I expected a processing time of at most 5 hours. I'm running Ubuntu 14.04 64-bit and Anaconda 64-bit. RAM usage is at 50%. I deactivated the redirect to the login screen after a period of inactivity; performance stayed the same. Switching off the screen after inactivity didn't influence execution time either. What else could be the reason for this unexpectedly slow execution?
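Since the timings scale linearly from 10,000 to 30,000 lines but blow up on the full set, one way to narrow it down is to profile slices of increasing size and compare where the time goes. A minimal cProfile sketch, where process_lines is a hypothetical stand-in for the actual processing function:

```python
import cProfile
import pstats

def process_lines(lines):
    # Hypothetical stand-in for the real (cythonized) processing function.
    return [line.upper() for line in lines]

lines = ["some raw line"] * 100000  # or a real 100k-line slice of the dataset

profiler = cProfile.Profile()
profiler.enable()
process_lines(lines)
profiler.disable()

# Show the 10 most expensive call sites; rerun with a larger slice and diff.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```

Note that cProfile only attributes time inside Cython code if the module was compiled with profiling enabled (the `# cython: profile=True` directive); otherwise each cythonized call appears as one opaque entry.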

Is it possible to stagger builds in Hudson/Jenkins?

I have Jenkins set up to build XBMC images for different platforms. My system takes around 6 hours to build each image, so I prefer to run the builds in parallel, usually 2 or 3 at a time. The problem is that if they have to download updates to modules (like the Linux kernel or something), the 2 or 3 builds running in parallel will download at the same time and corrupt the download (they point to the same folder).
Is it possible in Jenkins/Hudson to specify an offset? (I know you can schedule builds, as well as use a trigger that builds after completion of another project.) Something like:
Build 1: immediately
Build 2: start 20 minutes after build 1
Build 3: start 20 minutes after build 2
I tried looking for a plugin as well as searching Google, but no luck. I also know that I could use the cron-like scheduling capabilities in Jenkins, but my build trigger polls the Git repo for changes; I'm not just scheduling blindly.
One way to do it is to use the "Quiet Period" option under "Advanced".
Set it to 1200 seconds for Job 2 and 2400 seconds for Job 3.
That means Job 1 will be queued immediately when a change is noticed in Git, Job 2 will go into the queue with a 20-minute delay, and Job 3 with a 40-minute delay.
Another way would be to make the jobs some sort of build flow (whether with the Build Flow plugin or by making the last task of job A trigger job B). If you can turn the download into its own job, you can define the "download" job as single-threaded and the rest as multithreaded.
Doing this serializes only what needs to be serialized. An "every twenty minutes" offset wastes time when the download takes fifteen minutes, and fails (possibly in a hard-to-debug way) when there's a slowdown and it takes twenty-five.
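If the builds share one machine (they must, since they write to the same folder), the "serialize only the download" idea can also be expressed directly in the build script with an OS-level file lock. This is a sketch of the concept only, not a Jenkins feature; the lock path and download script are hypothetical:

```python
import fcntl       # Unix-only; the builds here are Linux images
import subprocess

LOCK_PATH = "/var/tmp/xbmc-module-download.lock"  # hypothetical shared lock file

def download_modules():
    """Let only one build at a time refresh the shared module cache."""
    with open(LOCK_PATH, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # blocks until any other build is done
        subprocess.run(["./update_modules.sh"], check=True)  # hypothetical step
    # Lock is released when the file is closed.

download_modules()
# The CPU-heavy image build itself can then run fully in parallel.
```

This wastes no time when downloads are fast and never overlaps them when they are slow, which is exactly the property a fixed 20-minute offset lacks.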