after effects timeline in compositions - after-effects

I'm trying to edit my first video in After Effects. When I include a video in a composition, a green bar appears that only lets me preview about 15 seconds of the video before it loops the same part again.
Every time I try to change that, the bar is regenerated.
My main question is:
Is it possible to change that green bar on the right side of the image, to avoid the 15-second limit?
Thanks!

The time you can preview depends on your available RAM. The green line means "This is the part that is already rendered into your RAM". You can edit the amount of RAM available for After Effects under:
Preferences > Memory

Try these solutions:
1- Use a lower resolution (see image 1). Choose:
*Half to extend the preview time 4 times
*Third to extend it 9 times
*Quarter to extend it 16 times
Remember that this affects the preview quality, but you can change it back at any time.
image 1
2- Use a lower frame rate (first case in image 2), or skip some frames (second case) by setting it to 1 or 2.
Remember again that this can give your video a stroboscopic effect, but you can change it back at any time.
image 2

Related

Why does srcset resize image?

I'm seeing some weird behaviour using srcset and I'm having a hard time understanding it. I've made a CodePen: http://codepen.io/anon/pen/dYBvNM
I have a set of images (that Shopify generates) in various sizes: 240px, 480px, 600px and 1024px. The catch is that those are maximum sizes. This means that if a merchant uploads a smaller image (let's say 600px), the 1024px version will actually be 600px, not 1024px. I cannot know that in advance, so I'm forced to simply list all the sizes as a "best case":
<img
  src="my_1024x1024.jpg"
  srcset="my_240px.jpg 240w, my_480px.jpg 480w, my_600px.jpg 600w, my_1024px.jpg 1024w"
  sizes="(max-width: 35em) 100vh, 610px"
>
The weirdness happens when the image is indeed smaller than the expected maximum size. When that is the case, the browser correctly selects the appropriate image (here it would select the 1024 version on a 15" Retina), but since the image is actually smaller than the 1024px I've declared, the browser ends up resizing it to be smaller than its native resolution.
You can compare in the CodePen http://codepen.io/anon/pen/dYBvNM that those two images are both the 1024px version, but the one using srcset is rendered smaller than the one using src only. I would have expected the browser to leave the image at its native resolution.
Could you please explain why it does that?
Thanks!
The way it works is that 'w' descriptors are converted into 'x' descriptors by dividing the given value by the effective size from the sizes attribute. So for instance, if 1024w is picked and the size is 610px, then 1024/610 = 1.67868852459016x, and that is the pixel density the browser will apply to the image. If the image is then not in fact 1024 pixels wide, the browser will still apply this same density, which will "shrink" the image, because that is the right thing to do in the valid case where the image width and the 'w' descriptor match.
You have to make the descriptors match the resource width. When the user uploads an image, you can check its width and use that as the biggest descriptor in srcset (if it's smaller than 1024), and remove the descriptors that are bigger than the given image width.
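To make the arithmetic concrete, here is a small, hypothetical sketch of that selection logic in C++ (not the browser's actual code; the slot width and descriptors are taken from the example above, and the device pixel ratio of 2 is assumed):

#include <cstdio>
#include <vector>

// Hypothetical sketch of how 'w' descriptors become pixel densities.
// Assumes a 610 CSS px slot (from the sizes attribute) and a device
// pixel ratio of 2 (e.g. a Retina display).
int main() {
    const double slotWidthCss = 610.0;                              // effective size from "sizes"
    const double devicePixelRatio = 2.0;
    const std::vector<double> descriptors = {240, 480, 600, 1024};  // 'w' values from srcset

    // Each candidate gets a density of w / slot width; the browser picks one
    // that covers the device pixel ratio, or the largest one otherwise.
    double chosen = descriptors.back();
    for (double w : descriptors)
        if (w / slotWidthCss >= devicePixelRatio) { chosen = w; break; }
    const double density = chosen / slotWidthCss;
    printf("chosen descriptor: %.0fw, density: %.4fx\n", chosen, density);

    // If the real file is only 600 px wide, the same density is still applied,
    // so the image is displayed smaller than the 610 CSS px slot.
    const double realWidthPx = 600.0;
    printf("displayed width: %.1f CSS px\n", realWidthPx / density);
    return 0;
}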

Get the frame from a video by time (OpenCV)

I have a video and I have important times in this video.
For example:
"frameTime1": "00:00:01.00"
"frameTime2": "00:00:02.50"
"frameTime3": "00:00:03.99"
...
I have the FPS and the totalFrameCount.
If I want to get the frame at one of those times, for example the frame that happens at "frameTime2": "00:00:02.50", I would do the following:
FrameIndex = (Time*FPS)/1000; // 1000 because 1 second = 1000 milliseconds
In this case 00:00:02.50 = 2500 milliseconds, and the FPS = 29.
So the FrameIndex in this case is 72.5, and I would choose either frame 72 or 73, but I feel that's not accurate enough. Any better solution?
What's the best and most accurate way to do this?
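A minimal sketch of that index calculation and lookup, assuming OpenCV 3's VideoCapture API ("video.mp4" and the 2500 ms timestamp are placeholders):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <cstdio>

// Seek to the frame nearest to a timestamp given in milliseconds.
int main() {
    cv::VideoCapture cap("video.mp4");
    if (!cap.isOpened()) return 1;

    const double fps = cap.get(cv::CAP_PROP_FPS);
    const double timeMs = 2500.0;   // 00:00:02.50
    const int frameIndex = static_cast<int>(std::round(timeMs * fps / 1000.0));

    cap.set(cv::CAP_PROP_POS_FRAMES, frameIndex);   // jump to that frame
    cv::Mat frame;
    if (cap.read(frame))
        printf("grabbed frame %d (~%.1f ms)\n", frameIndex, frameIndex * 1000.0 / fps);
    return 0;
}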
The most accurate thing you have at your disposal is the frame time. When you say that an event occurred at 2500ms, where is this time coming from? Why is it not aligned with your framerate? You only have video data points at 2483ms and 2517ms, no way around that.
If you are tracking an object on the video, and you want its position at t=2500, then you can interpolate the position from the known data points. You can do that either by doing linear interpolation between the neighboring frames, or possibly by fitting a curve on the object trajectory and solving for the target time.
If you want to rebuild a complete frame at t=2500 then it's much more complicated and still an open problem.
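For the interpolation case, a minimal sketch (assuming you already track the object's position in the two frames that bracket the requested time; all values here are made up):

#include <opencv2/opencv.hpp>
#include <cstdio>

// Linearly interpolate a tracked position between the two neighbouring frames.
int main() {
    const double t0 = 2483.0, t1 = 2517.0;   // times of the bracketing frames (ms)
    const cv::Point2f p0(120.0f, 80.0f);     // object position at t0
    const cv::Point2f p1(126.0f, 83.0f);     // object position at t1

    const double t = 2500.0;                 // requested time (ms)
    const float a = static_cast<float>((t - t0) / (t1 - t0));   // factor in [0, 1]
    const cv::Point2f p = p0 + (p1 - p0) * a;

    printf("estimated position at %.0f ms: (%.2f, %.2f)\n", t, p.x, p.y);
    return 0;
}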

opencv mat scan random time stealing

My application is C++ and OpenCV based and needs to detect an object in an image by threshold filtering. I divided the image into small strips for performance reasons, so I only scan the areas I need to. The image is 2400x1800 pixels and the strips are 1000x50. The image color space is HSV. As the desired object can be one of a few colors (for example 8), I run the filter 8 times per strip, so in total the application runs the filter a few tens of times.
The application is time critical.
For most of the runs, the strip filter takes <<1 millisecond.
The problem: every few filters (between 10 and 40, depending on the strip size), the run takes 15 milliseconds (always the same 15 milliseconds)!
The total run, which should take 1-2 milliseconds, takes between 50 and 100 milliseconds, depending on how many 15-millisecond runs there were.
The heart of the code, which accesses the Mat and causes the lost time, looks like this:
for (int i = 0; i < img_hsv.cols; i++) {        // cols
    for (int j = 0; j < img_hsv.rows; j++) {    // rows
        p1i = img_hsv.at<uchar>(j, i*3 + 0);    // H
        p2i = img_hsv.at<uchar>(j, i*3 + 1);    // S
        p3i = img_hsv.at<uchar>(j, i*3 + 2);    // V
    }
}
Again, the rate of these stalls increases as the strip size increases. I assume it has something to do with accessing the PC's memory resources. I have already tried changing the page size and defining the code as a critical section, with no success.
The application is Win32, running on XP or 7.
I appreciate your help.
Many thanks,
HBR.
It is normally not necessary to access pixels individually for filtering operations. You left out the details of your algorithm - maybe you can implement it by using an OpenCV function like threshold and related, which will work on the whole image. These methods are optimized for memory access, so you wouldn't have to spend time to track down timing issues like this.
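For example, a hedged sketch of what that could look like with cv::inRange (the HSV bounds are arbitrary placeholders, not tuned for any of your colours):

#include <opencv2/opencv.hpp>

// Sketch: threshold a whole HSV strip for one colour in a single call,
// instead of reading H, S and V pixel by pixel.
cv::Mat detectColour(const cv::Mat& strip_hsv) {
    const cv::Scalar lower(100, 80, 50);    // lower H, S, V bound (placeholder)
    const cv::Scalar upper(130, 255, 255);  // upper H, S, V bound (placeholder)

    cv::Mat mask;
    cv::inRange(strip_hsv, lower, upper, mask);   // 255 where the pixel is in range
    return mask;                                  // binary mask of candidate pixels
}

cv::countNonZero(mask) then tells you whether that colour is present in the strip, again without touching individual pixels.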

find the same area between 2 images

I want to merge 2 images. How can I remove the area that the 2 images have in common?
Can you tell me an algorithm to solve this problem? Thanks.
The two images are screenshots. They have the same width, and image 1 is always above image 2.
When two images have the same width and there is no X-offset at the left side, this shouldn't be too difficult.
You should create two vectors of integers and store the CRC of each pixel row in the corresponding vector element. After doing this for both pictures, find the CRC of the first line of the lower image in the first vector. Its position is the offset in the upper picture. Then check that all following CRCs from both pictures are identical. If not, look up the next occurrence of the initial CRC in the upper image and try again.
Once you have verified that the CRCs of both pictures are identical when the offset is applied, you can use the bitblit function of your graphics library to build the composite picture.
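A rough sketch of that idea, assuming both screenshots are loaded as equal-width, 8-bit cv::Mat images (a simple row hash stands in for a real CRC):

#include <opencv2/opencv.hpp>
#include <functional>
#include <string>
#include <vector>

// Hash one pixel row so rows can be compared cheaply (stand-in for a CRC).
static size_t rowHash(const cv::Mat& img, int row) {
    const char* p = img.ptr<char>(row);
    return std::hash<std::string>{}(std::string(p, img.cols * img.elemSize()));
}

// Find the row in the upper image where the lower image starts to overlap.
// Returns -1 if no overlap is found.
int findOverlapStart(const cv::Mat& upper, const cv::Mat& lower) {
    std::vector<size_t> upperHashes(upper.rows), lowerHashes(lower.rows);
    for (int r = 0; r < upper.rows; ++r) upperHashes[r] = rowHash(upper, r);
    for (int r = 0; r < lower.rows; ++r) lowerHashes[r] = rowHash(lower, r);

    for (int start = 0; start < upper.rows; ++start) {
        if (upperHashes[start] != lowerHashes[0]) continue;
        bool match = true;
        for (int k = 1; start + k < upper.rows && k < lower.rows; ++k)
            if (upperHashes[start + k] != lowerHashes[k]) { match = false; break; }
        if (match) return start;   // offset of the shared region in the upper image
    }
    return -1;
}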
I haven't come across something similar before, but I think the following might work:
Convert both to grey-scale.
Enhance the contrast; a grey box might become white, for example, and the text would become darker. (This is just to increase the confidence in the next step.)
Apply some threshold, converting the pictures to black and white.
Afterwards, you could find the similar areas (and thus the offset of overlap) with a good degree of confidence. To find the similar parts, you could use harper's method (which is good, but I don't know how reliable it would be without the said filtering), or you could apply some DSP operation(s) like convolution.
Hope that helps.
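A rough sketch of that preprocessing with OpenCV (the threshold value of 128 is an arbitrary placeholder; the offset search itself would follow on the result):

#include <opencv2/opencv.hpp>

// Sketch of the preprocessing steps: grey-scale, contrast enhancement,
// then a hard threshold to black and white.
cv::Mat preprocess(const cv::Mat& bgr) {
    cv::Mat grey, enhanced, bw;
    cv::cvtColor(bgr, grey, cv::COLOR_BGR2GRAY);   // convert to grey-scale
    cv::equalizeHist(grey, enhanced);              // stretch the contrast
    cv::threshold(enhanced, bw, 128, 255, cv::THRESH_BINARY);
    return bw;                                     // black-and-white image
}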
If your images are the same width and image 1 is always on top, I don't see how hard it could be.
Just store the bytes of the last line of image 1.
Then, from the first line to the last line of image 2, run this test:
If the current line of image 2 is not equal to the last line of image 1 -> continue
else -> break the loop
Then define a new byte container for your new image:
Store all the lines of image 1 + all the lines of image 2 starting at (the found line + 1).
What will make you sweat here is finding the libraries to manipulate all these data structures, but after a bit of linking and documentation digging you should be able to implement that easily.
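A minimal sketch of that procedure with OpenCV (assuming equal-width, 8-bit screenshots; "top.png" and "bottom.png" are placeholder file names):

#include <opencv2/opencv.hpp>
#include <cstring>

// Stitch two screenshots: find the row of the bottom image that matches the
// last row of the top image, then append only the rows after it.
int main() {
    cv::Mat top = cv::imread("top.png");
    cv::Mat bottom = cv::imread("bottom.png");
    if (top.empty() || bottom.empty() || top.cols != bottom.cols) return 1;

    const uchar* lastRow = top.ptr<uchar>(top.rows - 1);
    const size_t rowBytes = top.cols * top.elemSize();

    int found = -1;                            // -1 means no shared line was found
    for (int r = 0; r < bottom.rows; ++r) {
        if (std::memcmp(lastRow, bottom.ptr<uchar>(r), rowBytes) == 0) {
            found = r;
            break;
        }
    }

    cv::Mat merged;                            // image 1 + the non-shared part of image 2
    cv::vconcat(top, bottom.rowRange(found + 1, bottom.rows), merged);
    cv::imwrite("merged.png", merged);
    return 0;
}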

Optical flow using opencv

I am using the pyramidal Lucas-Kanade function of OpenCV to estimate the optical flow. I call cvGoodFeaturesToTrack and then cvCalcOpticalFlowPyrLK. This is my code:
while (1)
{
    ...
    cvGoodFeaturesToTrack(frameAth, eig_image, tmp_image, cornersA, &corner_count,
                          0.01, 5, NULL, 3, 0.4);
    std::cout << "CORNER COUNT AFTER GOOD FEATURES2TRACK CALL = " << corner_count << std::endl;
    cvCalcOpticalFlowPyrLK(frameAth, frameBth, pyrA, pyrB, cornersA, cornersB, corner_count,
                           cvSize(win_size, win_size), 5, features_found, features_errors,
                           cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3),
                           CV_LKFLOW_PYR_A_READY | CV_LKFLOW_PYR_B_READY);
    cvCopy(frameBth, frameAth, 0);
    ...
}
frameAth is the previous gray frame and frameBth is the current gray frame from a webcam. When I output the number of good features to track in each frame, the number decreases over time and keeps decreasing. But if I terminate the program and run it again (without disturbing the webcam's field of view), many more points are reported as good features to track. How can the function give such a different number of points for the same field of view and the same scene? And the difference is large: after 4 minutes of execution the number of good features to track is 20 or 50, but when the program is terminated and run again it starts at 500 to 700 and then slowly decreases again. I have been using OpenCV for the past 4 months, so I am still a little new to it. Please guide me or tell me where I can find a solution. Lots of thanks in advance.
You have to call cvGoodFeaturesToTrack once (at the beginning, before the loop) to detect good features to track, and then track these features using cvCalcOpticalFlowPyrLK. Take a look at the default OpenCV example: OpenCV/samples/cpp/lkdemo.cpp.
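A hedged sketch of that structure, reusing the variable names from the question (detect once before the loop, then only track inside it):

// Detect features once, then keep tracking the same points, feeding the
// successfully tracked positions back in as the input for the next frame.
cvGoodFeaturesToTrack(frameAth, eig_image, tmp_image, cornersA, &corner_count,
                      0.01, 5, NULL, 3, 0.4);
while (1) {
    // ... grab the next frame into frameBth ...
    cvCalcOpticalFlowPyrLK(frameAth, frameBth, pyrA, pyrB, cornersA, cornersB,
                           corner_count, cvSize(win_size, win_size), 5,
                           features_found, features_errors,
                           cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3),
                           0);   // 0: let the function build both pyramids itself
    int kept = 0;                // keep only the points that were actually found
    for (int n = 0; n < corner_count; ++n)
        if (features_found[n]) cornersA[kept++] = cornersB[n];
    corner_count = kept;
    cvCopy(frameBth, frameAth, 0);
}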
You are calling cvGoodFeaturesToTrack and passing corner_count by reference. Its value decreases if fewer features are found. You have to reset corner_count to its initial value before calling cvGoodFeaturesToTrack in each iteration of the while loop.
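A minimal sketch of that second fix, if you keep re-detecting every frame (MAX_CORNERS is a placeholder for however many corners you allocated):

const int MAX_CORNERS = 500;    // placeholder for your allocation size
while (1) {
    // ...
    corner_count = MAX_CORNERS; // reset the in/out parameter every iteration,
                                // otherwise last frame's result caps the next search
    cvGoodFeaturesToTrack(frameAth, eig_image, tmp_image, cornersA, &corner_count,
                          0.01, 5, NULL, 3, 0.4);
    // ... cvCalcOpticalFlowPyrLK and cvCopy as before ...
}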