I am adding Twitter Card meta tags to my pages, and I populate the tag values dynamically.
One page has a large Twitter Card image, around 720px wide by 960px tall.
When I check it with the Twitter Card validator, the image does not get resized and it looks bad.
I googled and found that Twitter will not crop a card image unless it has an exceptional aspect ratio.
What is an exceptional aspect ratio?
How can I work out whether my image's aspect ratio is acceptable, and then use that image in the Twitter Card image meta tag?
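For reference, Twitter's summary_large_image card expects roughly a 2:1 landscape image, while 720x960 is 3:4 portrait, which is why the validator crops it badly. A minimal sketch of the tags (the URLs and text here are placeholders):

```
<!-- summary_large_image expects roughly 2:1, e.g. 1200x600;
     a 3:4 portrait image such as 720x960 will get cropped. -->
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="Page title">
<meta name="twitter:description" content="Short description of the page.">
<meta name="twitter:image" content="https://example.com/assets/card-1200x600.jpg">
```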
I'm trying to add images to a video using AWS MediaConvert. I used the MediaConvert graphic overlay / image inserter feature to do this. However, the image is drawn over the top of the video in the output for the given duration. Instead, I want the image to be shown on its own first, and then the video to play from its beginning, and similarly for the rest of the images. Can this be done with AWS MediaConvert?
Overlays are normally used for things like watermarks, logos, or simple sports scores/news tickers: images you want to appear over the top of the video.
You could create a clip of blank video to insert into your output, then apply the overlay to just that clip.
Another option is to convert the image into a short video clip yourself with ffmpeg and insert that into your output, as sketched below.
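For instance, a minimal ffmpeg invocation for the second option (file names and the 5-second duration are placeholders; match the resolution, frame rate, and codec settings of your main video so the clips join cleanly):

```
ffmpeg -loop 1 -i slide.jpg -t 5 -r 30 -vf "scale=1280:720" \
  -c:v libx264 -pix_fmt yuv420p slide.mp4
```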
We're using the Zurb Foundation framework. It has a component called Interchange that serves responsive images.
See https://foundation.zurb.com/sites/docs/interchange.html
While it does serve images based on the viewport, it does not support lazy loading, and we want to lazy-load some images using the Intersection Observer API.
See https://developers.google.com/web/fundamentals/performance/lazy-loading-guidance/images-and-video/
Objective:
When we choose to lazy load an image, we give the IMG the class "lazy", and Foundation Interchange serves a low-res/small placeholder image. This part is easy. Then we use an Intersection Observer to change one folder in the path so that it points to the high-res image instead. This is the harder part.
Important note:
Most techniques don't work for us because we are loading responsive images, so we can't simply point to one image that varies with the viewport.
We want to apply a class to any image and have it lazy-load another image via an Intersection Observer. It will load a small low-res image right away and then swap it out for the high-res image later on.
Instead of using data-src like most solutions, we want to change the path in the image's src.
For example, assume the src is:
<img class="lazy" src="assets/img1/test-blur2.jpg">
I want the observer to watch the image and change its path as follows:
<img class="lazy" src="assets/img/test-blur2.jpg">
In other words, I want to find images with class="lazy", delete the "1" after /img, and then show the updated image.
Thanks in advance for any tips
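A minimal sketch of that approach, assuming the placeholder src is already on the element when the script runs (if Interchange rewrites src when the viewport changes, you may need to apply the same replacement to the data-interchange attribute as well):

```
// Swap the low-res folder for the high-res one when each .lazy
// image scrolls into view. The folder names come from the question.
document.addEventListener('DOMContentLoaded', function () {
  var lazyImages = document.querySelectorAll('img.lazy');

  if ('IntersectionObserver' in window) {
    var observer = new IntersectionObserver(function (entries, obs) {
      entries.forEach(function (entry) {
        if (!entry.isIntersecting) return;
        var img = entry.target;
        // Drop the "1" after /img so the path points at the high-res copy.
        img.src = img.src.replace('/img1/', '/img/');
        img.classList.remove('lazy');
        obs.unobserve(img); // each image only needs to be swapped once
      });
    });
    lazyImages.forEach(function (img) { observer.observe(img); });
  } else {
    // Fallback for browsers without IntersectionObserver: swap immediately.
    lazyImages.forEach(function (img) {
      img.src = img.src.replace('/img1/', '/img/');
    });
  }
});
```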
I am attempting to use Azure Computer Vision OCR to recognize text in a JPEG image. The image is about 2000x3000 pixels and is a picture of a contract. I want to get all the text and the bounding boxes. The image's DPI is over 300 and its quality is very clear. I noticed that a lot of text was being skipped, so I cropped a section of the image and submitted that instead. This time it recognized text that it did not recognize before. Why would it do this? If the quality of the image never changed and the image was within the resolution requirements, why is it skipping text?
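One thing worth trying for a dense document like this is the asynchronous Read endpoint, which Microsoft recommends over the legacy OCR endpoint for text-heavy images. A rough sketch in Node (the endpoint and key are placeholders; the call returns an Operation-Location URL that you poll for results):

```
const fs = require('fs');

const endpoint = 'https://YOUR_RESOURCE.cognitiveservices.azure.com'; // placeholder
const key = 'YOUR_KEY'; // placeholder

async function readImage(path) {
  // Submit the image; the Read API is asynchronous.
  const submit = await fetch(endpoint + '/vision/v3.2/read/analyze', {
    method: 'POST',
    headers: {
      'Ocp-Apim-Subscription-Key': key,
      'Content-Type': 'application/octet-stream',
    },
    body: fs.readFileSync(path),
  });
  const operationUrl = submit.headers.get('operation-location');

  // Poll until the job finishes.
  for (;;) {
    await new Promise((r) => setTimeout(r, 1000));
    const res = await fetch(operationUrl, {
      headers: { 'Ocp-Apim-Subscription-Key': key },
    });
    const body = await res.json();
    if (body.status === 'succeeded') return body.analyzeResult.readResults;
    if (body.status === 'failed') throw new Error('Read operation failed');
  }
}

// Print each detected line with its bounding box.
readImage('contract.jpg').then((pages) => {
  pages.forEach((page) =>
    page.lines.forEach((line) => console.log(line.boundingBox, line.text)));
});
```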
I am using AWS Rekognition to detect text from a PDF that has been converted into a JPEG.
The image I am using has text of approximately font size 10-12 on a regular letter-size page. However, the font changes several times throughout the image.
Are my missed detections and low confidence levels due to having a document where the text changes often? Or the small font?
Essentially, I'd like to know: what kind of image/text gives the best results from a text-detection algorithm?
The DetectText API can detect up to 50 words in an image, and to be detected, text must be within +/- 30 degrees orientation of the horizontal axis. You are trying to extract a page full of text, and that's the problem :)
AWS now provides the Textract service, which is specifically intended for OCR on images and documents; a sketch follows.
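For example, a rough sketch with the AWS SDK for JavaScript, running Textract's synchronous DetectDocumentText on a local JPEG (the file name and region are placeholders):

```
const fs = require('fs');
const AWS = require('aws-sdk');

const textract = new AWS.Textract({ region: 'us-east-1' }); // placeholder region

textract.detectDocumentText(
  { Document: { Bytes: fs.readFileSync('page.jpg') } }, // placeholder file
  (err, data) => {
    if (err) throw err;
    // Blocks come back as PAGE, LINE, and WORD; print each detected
    // line along with its confidence score.
    data.Blocks
      .filter((b) => b.BlockType === 'LINE')
      .forEach((b) => console.log(b.Confidence.toFixed(1), b.Text));
  }
);
```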
I'm currently trying to download the equirectangular images Google displays in its 360° Street View to an image file, so that I can display them in VR in Unreal Engine 4. I've tried a few things:
Requesting the panorama tiles from a constructed URL in the format described in the Street View API docs. This returns a file-not-found error for any pano ID other than the example one. Perhaps I'm using the wrong method of getting a panorama ID? I used the following example to extract the pano ID and plugged it into the URL with tileX = tileY = 0 and a zoom level of 1, to no avail.
I've also tried downloading separate 2D images taken at 90-degree angles, but when I display them on the inside of a cube, the images are misaligned.
There's a tool called UnrealJS that I've been looking into in order to grab the panorama data and save it off, but my inexperience with Node.js and server-side JS has made this a very confusing, fruitless endeavor. Other programs I've looked into that extract these panoramic images use canvas tags to request the Maps API and then save what Google's API writes to the canvas into a buffer. Is this the way to go? UnrealJS does support a bastardized version of HTML that I may be able to use, but this is less than ideal.
Street View panoramas are divided into an equal grid of tiles cut from an equirectangular image. This article explains how to get the URL of every tile. For zoom level 5 (the highest resolution) there are 26 by 13 tiles (each tile being 512x512). All that is left is to download every tile and draw each one onto a large empty image in its respective place on the grid; a sketch follows the note below.
Note: by doing this you will be breaking Google's terms of service
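A sketch of the stitching step in browser JS. The tileUrl helper here is hypothetical; fill it in with the real URL format from the linked article:

```
var TILES_X = 26;    // zoom level 5 grid width
var TILES_Y = 13;    // zoom level 5 grid height
var TILE_SIZE = 512; // each tile is 512x512

// Hypothetical helper: build each tile's URL using the format
// described in the linked article (placeholder shown here).
function tileUrl(panoId, x, y, zoom) {
  return 'https://example.com/tile?panoid=' + panoId +
         '&zoom=' + zoom + '&x=' + x + '&y=' + y;
}

function loadTile(url) {
  return new Promise(function (resolve, reject) {
    var img = new Image();
    img.crossOrigin = 'anonymous'; // required to export the canvas later
    img.onload = function () { resolve(img); };
    img.onerror = reject;
    img.src = url;
  });
}

async function stitchPanorama(panoId) {
  var canvas = document.createElement('canvas');
  canvas.width = TILES_X * TILE_SIZE;   // 13312
  canvas.height = TILES_Y * TILE_SIZE;  // 6656
  var ctx = canvas.getContext('2d');

  // Download each tile and draw it at its grid position.
  for (var y = 0; y < TILES_Y; y++) {
    for (var x = 0; x < TILES_X; x++) {
      var tile = await loadTile(tileUrl(panoId, x, y, 5));
      ctx.drawImage(tile, x * TILE_SIZE, y * TILE_SIZE);
    }
  }
  // Export the stitched equirectangular image.
  return canvas.toDataURL('image/jpeg');
}
```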