I can see that most phone cameras put a label next to the resolution if it's wide, so it's easy to spot wide sizes from text such as: 2560x1536 (wide)
I know, as standard general knowledge, that 16:9 is classified as "wide".
My question is: how can I calculate whether a resolution is "wide" or not?
I already know how to get the aspect ratio, such as 16:9, 4:3, etc., but I have no idea how to then decide whether an aspect ratio is wide.
So, for example, why are these classified as wide?
2048x1232 = 128:77
1600x960 = 5:3
If the first part of the aspect ratio divided by the second part is greater than 1.3334, then it is considered wide.
So 4:3 gives 4/3 = 1.333, which is not wide.
16:9 gives 16/9 = 1.777, which is wide.
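A minimal sketch of that rule in C++ (treating anything strictly wider than 4:3, i.e. a ratio above ~1.3334, as wide; the threshold is this answer's convention, not a universal standard):

#include <iostream>

// Returns true if width:height is strictly wider than 4:3 (~1.3334).
bool isWide(int width, int height)
{
    return static_cast<double>(width) / height > 4.0 / 3.0;
}

int main()
{
    std::cout << std::boolalpha;
    std::cout << isWide(2560, 1536) << '\n'; // ratio 1.666... -> true (wide)
    std::cout << isWide(1600, 1200) << '\n'; // ratio 1.333... -> false (exactly 4:3)
}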
16/9 = 1.77
5/3 = 1.66
128/77 = 1.66
If you define "wide" as exactly 16:9, then 5:3 and 128:77 are not wide, because they have smaller aspect ratios.
Yet I believe the threshold for "wide" is assumed to be about 1.30 somewhere, and in that case 5:3 and 128:77 would both be wide.
I have closely studied the MS documentation on EMF files, and from the definitions of the three header types I can't see how to convert from logical coordinates (which the graphics records' coordinates are stored in) to device coordinates. The header has a Frame part that specifies the page size surrounding (but not necessarily bounding) the composite image, in 0.01 mm units, and a Bounds part that specifies the actual bounds of the composite image in logical units. Finally, there are the Device and Millimeters parts, which specify the size of the recording device.
From these there seems to be no way to calculate the ratio needed to convert from logical coords to device coords.
I must be missing something simple :-)
I think I've sussed it: you use these records:
EMR_SETVIEWPORTEXTEX - device units
EMR_SETVIEWPORTORGEX - (ditto)
EMR_SETWINDOWEXTEX - logical units
EMR_SETWINDOWORGEX - (ditto)
EMR_SETWORLDTRANSFORM
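For reference, those window/viewport records define the standard GDI window-to-viewport mapping. A minimal C++ sketch of it for the x axis (the y axis is identical, and any EMR_SETWORLDTRANSFORM is applied on top of this):

// Map a logical x coordinate to device units using the extents and
// origins set by the four records above.
long logicalToDeviceX(long xLog, long windowOrgX, long windowExtX,
                      long viewportOrgX, long viewportExtX)
{
    return (xLog - windowOrgX) * viewportExtX / windowExtX + viewportOrgX;
}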
Yes, the Bounds header property is specified as the actual bounds of the composite image (in logical units), but, on investigating EMFs created by Inkscape and Adobe Illustrator, I find that they do not adhere to this.
After creating your DC (CreateDC), use GetDeviceCaps to get the total number of dots (raster lines) available for your DC: HORZRES for width, VERTRES for height. The dots aren't square. Then, after reading your EMF file with GetEnhMetaFile, use GetEnhMetaFileHeader to get the header record. You then look at either rclBounds or rclFrame in the header record; the second rectangle is a multiple of the first. For EMFs created by PowerPoint, the top and left are zero in my experience, so you focus on the bottom and right. The ratio of the two is your aspect ratio.

You use that ratio to calculate the rectangle in DC units that has the same aspect ratio as rclBounds, but likely adds margins all around so your image doesn't go right to the edge of your device. That rectangle, with units that fall within the range provided by VERTRES and HORZRES, is the third argument to the PlayEnhMetaFile command, where you finish up.

In sum, you convert from the EMF logical units to the DC units by using VERTRES and HORZRES (from your DC) combined with the aspect ratio you calculate (from your EMF).
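To make that concrete, here is a minimal C++/Win32 sketch of the sequence just described (error handling omitted; the file name, the display DC, and the 10% margin are placeholders, and square device pixels are assumed, which the answer above notes is not always true):

#include <windows.h>

void playEmf(const wchar_t* path)
{
    // DC for the default display; a printer DC works the same way.
    HDC hdc = CreateDCW(L"DISPLAY", nullptr, nullptr, nullptr);
    int devW = GetDeviceCaps(hdc, HORZRES); // device width in pixels
    int devH = GetDeviceCaps(hdc, VERTRES); // device height in pixels

    HENHMETAFILE emf = GetEnhMetaFileW(path);
    ENHMETAHEADER hdr = {};
    GetEnhMetaFileHeader(emf, sizeof(hdr), &hdr);

    // Aspect ratio of the picture frame (rclFrame is in 0.01 mm units).
    double aspect = double(hdr.rclFrame.right - hdr.rclFrame.left) /
                    double(hdr.rclFrame.bottom - hdr.rclFrame.top);

    // Fit a rectangle with that aspect ratio inside the device,
    // leaving a margin so the image doesn't touch the edges.
    int maxW = devW * 9 / 10, maxH = devH * 9 / 10;
    int w = maxW, h = int(w / aspect);
    if (h > maxH) { h = maxH; w = int(h * aspect); }

    RECT rc = { (devW - w) / 2, (devH - h) / 2, 0, 0 };
    rc.right = rc.left + w;
    rc.bottom = rc.top + h;

    PlayEnhMetaFile(hdc, emf, &rc);

    DeleteEnhMetaFile(emf);
    DeleteDC(hdc);
}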
I have tried to follow the same simple logic as for a cylinder, box, etc., just defining the position for the textNode, but it is not working.
import AppKit
import SceneKit

func makeText(text3D: String, position: SCNVector3, depthOfText: CGFloat, color: NSColor, transparency: CGFloat) -> SCNNode
{
    // Build the extruded text geometry and set up its material.
    let textToDraw = SCNText(string: text3D, extrusionDepth: depthOfText)
    textToDraw.firstMaterial?.transparency = transparency
    textToDraw.firstMaterial?.diffuse.contents = color

    // Wrap the geometry in a node and place it in the scene.
    let textNode = SCNNode(geometry: textToDraw)
    textNode.position = position
    return textNode
}
The default font for SCNText is 36 point Helvetica, and a "point" in font size is the same as a unit of scene space. (Well, of local space for the node containing the SCNText geometry. But unless you've set a scale factor on your node, local space units are the same as scene space units.) That means even a short label can be tens of units tall and hundreds of units wide.
It's typical to build SceneKit scenes with smaller scope — for example, simple test scenes like you might throw together in a Swift playground using the default sizes for SCNBox, SCNSphere, etc might be only 3-4 units wide. (And if you're using SceneKit with ARKit, scene units are meters, so some text in 36 "point" font is the size of a few office blocks downtown.)
Also, the anchor point for a text geometry relative to its containing node is at the lower left corner of the text. Put all this together and it's entirely possible that there are giant letters looming over the rest of your scene, hiding just out of camera view.
Note that if you try to fix this by setting a much smaller font on your SCNText, the text might get jagged and chunky. That's because the flatness property is measured relative to the point size of the text (more precisely, it's measured in a coordinate system where one unit == one point of text size). So if you choose a font size that'd be tiny by screen/print standards, you'll need to scale down the flatness accordingly to still get smooth curves in your letters.
Alternatively, you can leave font sizes and flatness alone — instead, set a scale factor on the node containing the text geometry, or set that node's pivot to a transform matrix that scales down its content. For example, if you set a scale factor of 1/72, one unit of scene space is the same as one "inch" (72 points) of text height — depending on the other sizes in your scene, that might make it a bit easier to think of font sizes the way you do in 2D.
The fact is you generally just use "small numbers" for font sizes in SceneKit.
In 3D you always use real meters. A humanoid robot must be about "2" units tall, a car is about "3" units long and so on.
A very typical size for the font is about "0.1".
Note that the flatness value is SMALL, usually about one hundredth the size of the font. (Which makes sense: it's how long the line segments are.)
Typical:
t.font = UIFont(name: "Blah", size: 0.10) // font size in scene units
t.flatness = 0.001 // roughly 1/100 of the font size
Set the flatness to about 1/4 the size of the font (hence, 0.025 in the example) to understand what "flatness" is.
I would never change the scale of the text node; that's a bad idea for many reasons. There's absolutely no reason to do so, and it makes it very difficult to genuinely set the flatness appropriately.
But note ...
That being said, on the different platforms and different versions, SCNText() often does a basically bad job drawing text, and it can go to hell at small numbers. So yeah, you may indeed have to scale in practice, if the text construction is crap at (very) small values :/
I'm drawing text using QPainter on a QImage, and then saving it to TIFF.
I need to increase the DPI to 300, which should make the text bigger in terms of pixels (for the same point size).
You can try using QImage::setDotsPerMeterY() and QImage::setDotsPerMeterX(). DPI means "dots per inch". 1 inch equals 0.0254 meters. So you should be able to convert to dots per meter (dpm):
int dpm = 300 / 0.0254; // ~300 DPI
image.setDotsPerMeterX(dpm);
image.setDotsPerMeterY(dpm);
It's not going to be exactly 300 DPI (it's actually 299.9994), since the functions only work with integral values. But for all intents and purposes it's good enough (299.9994 vs 300 is quite close, I'd say).
There are approximately 39.37 inches in a meter. So:
Setting:
qimage.setDotsPerMeterX(xdpi * 39.37);
qimage.setDotsPerMeterY(ydpi * 39.37);
Getting:
xdpi = qimage.dotsPerMeterX() / 39.37;
ydpi = qimage.dotsPerMeterY() / 39.37;
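Putting the two answers together, here is a minimal Qt sketch (the image size, font, and file name are placeholders) that sets the DPI metadata before painting, so the point-sized font should be resolved against the image's 300 DPI, and then saves the result as TIFF:

#include <QGuiApplication>
#include <QImage>
#include <QPainter>
#include <QFont>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv); // required for the font machinery

    QImage image(2480, 3508, QImage::Format_RGB32); // roughly A4 at 300 DPI
    const int dpm = qRound(300 / 0.0254);           // 300 DPI -> dots per meter
    image.setDotsPerMeterX(dpm);
    image.setDotsPerMeterY(dpm);
    image.fill(Qt::white);

    QPainter painter(&image);
    painter.setFont(QFont("Times", 12)); // 12 pt, resolved against the image DPI
    painter.drawText(100, 200, "Hello at 300 DPI");
    painter.end();

    image.save("out.tiff"); // TIFF stores the resolution metadata
    return 0;
}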
I want to implement high-quality raycasting volume rendering using OpenGL, GLSL and C++, and I use image-order volume rendering. During the compositing step of raycasting I use the following formulas (front-to-back order), where C_src and α_src are the opacity-weighted color and opacity of the current sample:

C_dst ← C_dst + (1 − α_dst) · C_src
α_dst ← α_dst + (1 − α_dst) · α_src
When I read the book "Real-Time Volume Graphics", page 16, I see that we need to do opacity correction if the sample rate changes:

α_corrected = 1 − (1 − α)^(Δx_new / Δx)

and use this opacity value to replace the old one. In this formula, Δx_new is the new sample distance and Δx is the old sample distance.

My question is: how do I determine Δx in my program?
Say your original volume has a resolution of V = (512, 256, 128); then, when casting a ray in the direction (rx, ry, rz), the sample distance is 1/|r·V|. However, say your raycaster is largely oversampling, sampling the volume at 3× that rate, V′ = (1536, 768, 384); the oversampled sample distance is 1/|r·V′|, and hence the ratio of sample distances is 1/3. This ratio is the exponent Δx_new/Δx to plug into the correction.
Note that the exponent is only noticeable with low-density volumes, i.e. in the case of medical images, low-contrast soft tissue. If you're imaging bones or dense tissue, then it makes almost no difference (BTDT).
datenwolf is correct, but there is one piece of missing information. A transfer function is a mapping between scalar values and colors. In RGBA, the range of values is between 0 and 1. You, as the implementer, get to choose what the actual value of opacity translates to, and that is where the original sample distance comes in.
Say you have the unit cube and you choose 2 samples; that translates to a sample distance of 0.5 if you trace a ray in a direction orthogonal to a face. If the alpha value is 0.1 everywhere in the volume, the resulting alpha is 0.1 + 0.1 · (1.0 − 0.1) = 0.19. If you choose 3 samples, then the resulting alpha is one more composite on top of the previous choice: 0.19 + 0.1 · (1 − 0.19) = 0.271. Thus, your choice of the original sample distance influences the outcome. Choose wisely.
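As an illustration, here is a minimal C++ sketch of a single front-to-back compositing step with the opacity correction applied (all names are illustrative; refDist is the Δx your transfer-function opacities were defined for, and sampleDist is the Δx_new your ray marcher actually uses):

#include <cmath>

struct RGBA { float r, g, b, a; };

// One front-to-back compositing step. src holds the transfer-function
// color and (uncorrected) opacity of the current sample.
void compositeStep(RGBA &dst, const RGBA &src, float sampleDist, float refDist)
{
    // Opacity correction: alpha' = 1 - (1 - alpha)^(sampleDist / refDist)
    float a = 1.0f - std::pow(1.0f - src.a, sampleDist / refDist);

    // Front-to-back "over": weight the sample by its corrected opacity
    // and by the transparency accumulated so far.
    float w = (1.0f - dst.a) * a;
    dst.r += w * src.r;
    dst.g += w * src.g;
    dst.b += w * src.b;
    dst.a += w;
}

The same corrected-alpha line drops straight into a GLSL fragment shader using pow().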
I am developing an Android game in cocos2d. How many different sizes of images do I need to support Android phones and tablets?
I have never used that engine, but if by image size you mean device screen size, then you should use a scale.
I took as my base the biggest screen I could, 1280x800, the one on my tablet, so as to be more precise on tablets too.
I apply the scale in (X, Y) to every image size and every operation in which the screen or screen size is involved, e.g.:
soldierBitmapX.move(movement*scaleX)
soldierBitmapY.move(movement*scaleY)
scaleX and scaleY represent your scale, and movement represents how many pixels your soldier will move.
This is an example to show how the scale is applied. I don't recommend moving your sprites with this exact operation, but keep in mind whether you should apply the scale.
You can apply this to every possible screen and your game will fit exactly on all of them. Beware of, for example, QVGA screens, which are more "square" in comparison with other standards, and very small.
EDIT (how to get the scale):
_xMultiplier = (_screenWidth/(1280.0f/100.0f))/100.0f;  // equivalent to _screenWidth / 1280.0f
_yMultiplier = (_screenHeight/(800.0f/100.0f))/100.0f;  // equivalent to _screenHeight / 800.0f
matrix.setScale(_xMultiplier, _yMultiplier);
This is an example of the scale being applied to the matrix that we'll use.
Through the ScaleX and ScaleY properties you can easily scale the images. For example, if you take 1280x800 as the tablet size, you can scale that sprite and use it; you can also use the same image for smaller resolutions, e.g. 320x480.
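To tie the two answers together, here is a minimal C++ sketch of the scaling idea (the 1280x800 reference size and all names are just examples):

#include <iostream>

struct Scale { float x, y; };

// Scale factors relative to a 1280x800 reference screen; this is the
// simplified form of the _xMultiplier/_yMultiplier computation above.
Scale makeScale(float screenWidth, float screenHeight)
{
    return { screenWidth / 1280.0f, screenHeight / 800.0f };
}

int main()
{
    // Example: a 100 px movement authored against the reference screen,
    // replayed on a 320x480 display.
    Scale s = makeScale(320.0f, 480.0f);
    std::cout << 100.0f * s.x << ' ' << 100.0f * s.y << '\n'; // 25 60
}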