CodeNameOne Gradient Rendering on iOS

In Codename One, I use a vertical gradient for a button.
When I build the iOS version and install it, the gradient shows correctly at startup, but after about a second the gradient disappears and the button is filled with only the first color.
On Android it works fine.
iOS 7.1.2 (11D257)

I strongly suggest you avoid gradient fills. They are VERY slow and take up a lot of RAM!
Use image backgrounds or 9-patch image borders, which are both faster and more memory efficient. This is covered in the performance how-do-i: http://www.codenameone.com/how-do-i---improve-application-performance-or-track-down-performance-issues.html
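The core of the advice is that a gradient should be computed once and cached as an image, not re-interpolated on every repaint. A minimal sketch of that idea in Python (the colors and sizes here are hypothetical, purely for illustration):

```python
def lerp(a, b, t):
    """Linear interpolation between two values."""
    return a + (b - a) * t

def vertical_gradient(width, height, top_rgb, bottom_rgb):
    """Precompute one color per scanline; reuse the result every frame
    instead of recomputing the interpolation on each repaint."""
    rows = []
    for y in range(height):
        t = y / (height - 1) if height > 1 else 0.0
        rows.append(tuple(round(lerp(top_rgb[i], bottom_rgb[i], t))
                          for i in range(3)))
    return rows

# Compute once (e.g. at startup), then blit the cached result on repaint.
cache = vertical_gradient(100, 3, (255, 0, 0), (0, 0, 255))
```

Painting the cached image is a single blit per frame, which is exactly why an image background or 9-patch border is cheaper than a live gradient fill.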

Related

How to draw a circle with a radial gradient on the canvas?

I need to draw a circle with a radial gradient on the canvas in my custom control (XAML), but Windows::UI::Xaml::Media contains only LinearGradientBrush. I know about the design guidelines, but my control requires this feature so the user can see the color gamut.
Below is what I need to get:
P.S. I am asking how to draw it at least programmatically, without XAML.
P.P.S. I know that I could draw a picture with a radial gradient in some editor and then draw it on the canvas, but that does not seem like a good solution for adaptive design.
You can check the RenderColorPickerHueRing() method for a sample of how you could do it on the CPU in a WriteableBitmap. It's not super fast (at least in the C# version), but at least it saves you from packaging another image with your app or from using DirectX, which is a bit trickier to set up correctly and stabilize.
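The CPU approach the answer describes amounts to filling a pixel buffer yourself, blending by distance from the center. A rough Python sketch of that technique (not the actual RenderColorPickerHueRing() code, which is C#):

```python
import math

def radial_gradient(size, inner_rgb, outer_rgb):
    """CPU-fill a size x size pixel buffer with a radial gradient,
    the way you would write pixels into a WriteableBitmap's array."""
    center = (size - 1) / 2.0
    max_dist = center * math.sqrt(2) if center else 1.0  # distance to a corner
    pixels = [[None] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            # Normalized distance from the center drives the color blend.
            t = min(math.hypot(x - center, y - center) / max_dist, 1.0)
            pixels[y][x] = tuple(
                round(inner_rgb[i] + (outer_rgb[i] - inner_rgb[i]) * t)
                for i in range(3))
    return pixels
```

Because the buffer is regenerated from the control's actual size, it adapts to layout changes, which is the advantage over packaging a pre-drawn image.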

How to improve or customize anti-aliasing of fonts for Qt applications?

I'm using Qt 4.8 for embedded Linux and view my application on the device display as well as on my desktop using the -qws flag.
When using a white font on a blue background, the font is really thin and hard to read. This happens only on the hardware/device, because the bit depth per channel is limited there.
The reason for this is simply the distance between the colors: black (0,0,0) on white (255,255,255) has a larger distance than white (255,255,255) on blue (100,160,250), so the contrast is smaller. When the anti-aliasing algorithm computes values between the two colors, the outer pixels may end up too close to the background. Take a look at the pixels pointed to in my image: the gray one (stemming from black) on the white background can be seen better than the gray one (stemming from white) on the blue background.
On my PC's display it still looks fine. Since the device's hardware loses precision in the color channels (5 or 6 bits instead of 8), the effect is huge and the font looks ugly.
What I'm interested in:
Can I do anything about the anti-aliasing algorithm, maybe change parameters so that the "contrast" is increased? For example, the anti-aliasing could use the pixel-perfect font and only "smear" it outwards instead of dampening the intensity of present pixels.
Should I use pre-rendered fonts instead of TrueType fonts? I compiled Qt for embedded with TTF support and like it because I have access to more fonts this way, but should I be using another format, like Qt's PFA or PFB?
Should I simply ask my designer to give me another (maybe thicker) font or to change the background color to a darker one? This, of course, is not preferred ;-)
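The effect described in the question can be made concrete with a small sketch: quantizing each channel to 5 or 6 bits collapses most of the anti-aliasing ramp between colors that are already close. Here the blue channel of white text (255) over the blue background (250) is used, assuming a simple round-to-nearest-level display quantization:

```python
def quantize(value, bits):
    """Reduce an 8-bit channel value to `bits` bits, then expand back to 0-255."""
    levels = (1 << bits) - 1
    return round(round(value / 255 * levels) / levels * 255)

def surviving_aa_levels(fg, bg, bits, steps=9):
    """How many distinct values an anti-aliasing ramp between fg and bg
    keeps after the display quantizes each channel to `bits` bits."""
    ramp = {quantize(round(fg + (bg - fg) * i / (steps - 1)), bits)
            for i in range(steps)}
    return len(ramp)
```

At 8 bits the ramp between 255 and 250 keeps 6 distinct values; at 5 bits it collapses to 2, so the smoothed edge pixels snap to either the text color or the background, which is exactly the "thin, hard to read" look on the device.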

How to minimize GPU overdraw on Google Glass?

I am trying to understand how the recently announced "GPU overdraw" feature works. Why are certain parts of the screen drawn twice or three times? How does this really work? Does this have anything to do with the nesting of layouts? How can this overdraw be minimized? In Windows Phone we have an option like BitmapCacheMode, which caches rendered output and prevents redrawing over and over again. Is there anything similar to this in Android? (The snippet below is from the official Google docs.)
With the latest version of Glass, developers have an option of turning on GPU overdraw. When you turn this setting on, the system will color in each pixel on the screen depending on how many times it was drawn in the last paint cycle. This setting helps you debug performance issues with deeply nested layouts or complex paint logic.
Pixels drawn in their original color were only drawn once.
Pixels shaded in blue were drawn twice.
Pixels shaded in green were drawn three times.
Pixels shaded in light red were drawn four times.
Pixels shaded in dark red were drawn five or more times.
Source - Google official docs.
The GPU overdraw feature is simply a debugging tool for visualizing overdraw. Excessive overdraw can cause poor drawing/animation performance as it eats up time on the UI thread.
This feature has been present in Android for some time, Glass simply exposed a menu option to turn it on. See the "Visualizing overdraw" section of http://www.curious-creature.org/docs/android-performance-case-study-1.html for more information.
Overdraw is not necessarily caused by nested layouts. Overdraw occurs when you have views over other views that draw to the same region of the screen. For instance, it is common to set a background on your activity, but then have a full screen view that also has a background. In this instance, you are drawing every pixel on the screen at least 2 times. To fix this specific issue, you can remove the background on the activity since it is never visible due to the child view.
Currently, Android does not have the ability to automatically detect and prevent overdraw, so it is up to the developer to account for this in their implementation.
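The mechanism the docs describe — counting draws per pixel and tinting accordingly — can be simulated in a few lines. This Python sketch is an illustration of the idea, not Android's implementation; the tint palette mirrors the legend quoted above:

```python
# Palette from the Glass docs quoted above (1 draw = original color).
OVERDRAW_TINTS = {1: "original", 2: "blue", 3: "green", 4: "light red"}

def overdraw_counts(width, height, rects):
    """Count how many times each pixel is drawn in one paint cycle.
    `rects` are (x, y, w, h) view bounds painted back-to-front."""
    counts = [[0] * width for _ in range(height)]
    for x, y, w, h in rects:
        for yy in range(y, y + h):
            for xx in range(x, x + w):
                counts[yy][xx] += 1
    return counts

def overdraw_tint(count):
    """Map a draw count to the debug tint described in the docs."""
    return OVERDRAW_TINTS.get(count, "dark red")

# The example from the answer: an activity background plus a full-screen
# child view with its own background draws every pixel twice.
counts = overdraw_counts(4, 4, [(0, 0, 4, 4), (0, 0, 4, 4)])
```

Dropping the redundant activity background removes one rectangle from the list, and every pixel falls back to a count of 1 ("original" tint).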

Is it possible to use the iPhone 4/5's full native resolution instead of the half-size coordinates of older iPhones?

The problem is, Cocos2d seems to think Retina devices are the same resolution as standard devices. If you draw a sprite at 768,1024 on the iPad, it will put it at the upper-right corner. If you draw a sprite at 768,1024 on the Retina iPad, content scaling makes it so it is also put in the upper-right corner. Yet if you draw a sprite on the Retina iPhone 5 at 640,1136, it isn't in the upper-right corner; it's off the screen by 2x the distance. In order to put it in the corner, you have to draw it at 320,568 because of the content scaling.
I do have a Default-568h@2x.png image, and I am on version 2.1.
My question is: Is there a way to get Cocos2d to make it so that drawing a sprite at 640,1136 on an iPhone 5 results in the sprite being in the upper-right corner?
Is it possible to set up a custom cocos2d GL projection, set winSizeInPoints equal to winSizeInPixels, and use a content scale of 1.0, so that you can use the iPhone 4/5's full native resolution instead of the half-size coordinates of older iPhones?
You can easily do this by changing the iPad suffix to "-hd" via CCFileUtils. That way, iPad devices will load the regular -hd assets meant for Retina iPhones.
Update regarding comments:
Positions on devices are measured in points, not pixels. So a non-Retina iPad and a Retina iPad both have a point resolution of 1024x768. This is great precisely because it makes adapting to screens with different pixel densities a no-brainer. This also works on the iPhone devices.
My suspicion is that you simply haven't added the Default-568h@2x.png launch image to your project yet, which may cause widescreen devices to be treated differently.
And specifically, cocos2d versions dated before the iPhone 5 have a bug that treats the iPhone 5 as a non-Retina device, proper default image or not.
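The point/pixel relationship behind this answer is a single multiplication by the content scale factor. A quick Python sketch of the conversion (illustrative only, mirroring how cocos2d maps points to device pixels):

```python
def points_to_pixels(pt, content_scale):
    """Convert a cocos2d position in points to device pixels."""
    return (pt[0] * content_scale, pt[1] * content_scale)

def pixels_to_points(px, content_scale):
    """Convert a device-pixel position back to points."""
    return (px[0] / content_scale, px[1] / content_scale)

# Retina iPhone 5: content scale 2, screen 640x1136 px = 320x568 points.
# To land a sprite on the top-right pixel corner, position it in points:
corner_points = pixels_to_points((640, 1136), 2)
```

This is why drawing at 640,1136 on the iPhone 5 lands off-screen: those values are interpreted as points and doubled to pixels, while 320,568 maps exactly to the 640,1136-pixel corner.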
Yes, you can share assets. An easy way to do this is to put all common sprites in a sprite sheet and check at runtime whether the device is an iPad; if so, load the iPhone HD sheet. We did this in many projects and it worked.
if (IS_IPAD)
{
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"GameSpriteSheet-hd.plist"];
}
else
{
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"GameSpriteSheet.plist"];
}
For the background image, it is good to have a separate iPad image; if the scaling does not look bad, you can scale the image at runtime.
As far as I can understand what you are trying to do, the following approach should work:
call [director enableRetinaDisplay:NO] when setting up your CCDirector;
hack or override the following methods in CCDirector so that winSizeInPixels is defined as the full screen resolution:
a. setContentScaleFactor:;
b. reshapeProjection:;
c. setView:;
Step 1 will make sure that no scaling factor is ever applied when rendering sprites or doing calculations; Step 2 will ensure that the full screen resolution is used whenever required (e.g., when defining the projection, but likely elsewhere as well).
About Step 2, you will notice that all listed methods show a statement like this:
winSizeInPixels_ = CGSizeMake( winSizeInPoints_.width * __ccContentScaleFactor, winSizeInPoints_.height * __ccContentScaleFactor );
__ccContentScaleFactor is set to 1 by step 1, and you should leave it like that; you could, e.g., customise the winSizeInPixels calculation to your aim, like this:
if (<IPHONE_4INCHES>)
    winSizeInPixels_ = CGSizeMake( winSizeInPoints_.width * 2, winSizeInPoints_.height * 2 );
else
    winSizeInPixels_ = CGSizeMake( winSizeInPoints_.width * __ccContentScaleFactor, winSizeInPoints_.height * __ccContentScaleFactor );
Defining a custom projection would unfortunately not work because winSizeInPixels is always calculated based on __ccContentScaleFactor; but, __ccContentScaleFactor is also used everywhere in Cocos2D to position/size sprites and the likes.
A final note on implementation: you could hack these changes into the existing CCDirectorIOS class, or you could derive your own MYCCDirectorIOS from it and override the methods there.
Hope it helps.

Anti-aliasing in OpenGL

I just started with OpenGL programming and I am building a clock application. I want it to look something simple like this: http://i.stack.imgur.com/E73ap.jpg
However, my application looks very "un-anti-aliased" : http://i.stack.imgur.com/LUx2v.png
I tried the polygon smoothing (GL_POLYGON_SMOOTH) method mentioned in the Red Book. However, that doesn't seem to do a thing.
I am working on a laptop with Intel integrated graphics. The card doesn't support things like GL_ARB_multisample.
What are my options at this point to make my app look anti-aliased?
Intel integrated video cards are notorious for their lack of support for OpenGL antialiasing. You can work around that, however.
First option: Manual supersampling
Make a texture 2x as big as the screen. Render your scene to the texture via an FBO, then render the texture at half size so it fills the screen, with bilinear interpolation. This can be very slow (in complex scenes) due to the 4x increase in pixels to draw.
It will result in weak antialiasing, so I don't recommend it for desktop software like your clock.
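The downsampling step of that technique is just a 2x2 box filter. A small CPU sketch of it in Python (illustrative; on the GPU the same averaging happens via bilinear filtering when the double-size texture is drawn at half size):

```python
def downsample_2x(img):
    """Average each 2x2 block of a 2x-supersampled grayscale image --
    the CPU analogue of rendering into a double-size FBO and drawing it
    back at half size with bilinear filtering."""
    return [[(img[y][x] + img[y][x + 1] +
              img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

# A hard black/white edge becomes one intermediate gray after filtering.
smoothed = downsample_2x([[0, 255],
                          [255, 255]])
```

With only four samples per output pixel, an edge gets a single intermediate shade, which is why 2x supersampling gives such weak antialiasing compared to proper multisampling.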
Second option: (advanced)
Use a shader to perform Morphological Antialiasing (MLAA). This is a newer technique, and I don't know how easy it is to implement. It's used by some advanced games.
Third option:
Use textures and bilinear interpolation to your advantage by emulating OpenGL's primitives via textures. The technique is described here.
Fourth option:
Use a separate texture for every element of your clock.
For example, for your hour-arrow, don't use a flat black GL_POLYGON shaped like the arrow. Instead, use a rotated GL_QUAD textured with an hour-arrow image drawn in an image program. Bilinear interpolation will then take care of antialiasing it as you rotate it.
This option takes the least effort and looks very good.
Fifth option:
Use a library that supports software rendering -
Qt
Cairo
Windows GDI+
WPF
XRender
etc
Such libraries contain their own algorithms for antialiased rendering, so they don't depend on your videocard for antialiasing. The advantages are:
Will render the same on every platform. (this is not guaranteed with OpenGL in various cases - for example, the thick diagonal "tick" lines in your screenshot are rendered as parallelograms, rather than rectangles)
Has a big bunch of convenient drawing functions ("drawArc", "drawText", "drawConcavePolygon", and those will support gradients and borders. also you get things like an Image class.)
Some, like Qt, will provide much more desktop-app type functionality. This can be very useful even for a clock app. For example:
in an OpenGL app you'd probably loop every 20 msec and re-render the clock without thinking twice. This would hog unnecessary CPU cycles and wake up the CPU on a laptop, depleting the battery. By contrast, Qt is very intelligent about when it must redraw parts of your clock (e.g., when the right half of the clock stops being covered by a window, or when your clock moves the minute-arrow one step).
once you get to implementing, e.g. a tray icon, or a settings dialog, for your clock, a library like Qt can make it a snap. It's nice to use the same library for everything.
The disadvantage is much worse performance, but that doesn't matter at all for a clock app, and it is offset once you take into account the intelligent-redraw functionality I mentioned.
For something like a clock app, the fifth option is very much recommended. OpenGL is mainly useful for games, 3D software and intense graphical stuff like music visualizers. For desktop apps, it's too low-level and the implementations differ too much.
Draw it into a framebuffer object at twice (or more) the final resolution and then use that image as a texture for a single quad drawn in the actual window.