I'm new to mobile game development on Marmalade. I want to get the screen width and height so I can handle different screen resolutions. What is the difference between Iw2DGetSurfaceWidth() and IwGxGetScreenWidth()? Are they exactly the same, and if not, which one should I use?
IwGx and its subcomponents use three different types of surfaces.
Device, which holds the width and height of the device, regardless of the screen orientation.
Screen, which is the same as Device but swaps the width and height when the device orientation changes.
Surface, which is made by the programmer. It's up to you to create surfaces; a screen can have several surfaces or none at all. A surface is a rectangular object, like an image, used for the UI.
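The Device/Screen distinction can be illustrated with a small helper. This is plain C++ for illustration, not part of the Marmalade API; the rotation flag is an assumption standing in for the orientation query:

```cpp
#include <utility>

// Hypothetical helper: Device dimensions are fixed, while Screen dimensions
// swap when the device is rotated away from its natural orientation.
std::pair<int, int> screenSize(int deviceW, int deviceH, bool rotated) {
    if (rotated)
        return { deviceH, deviceW };  // width and height are swapped
    return { deviceW, deviceH };
}
```

As for the original question: Iw2DGetSurfaceWidth() reports the current Iw2D drawing surface, which is the screen by default, so the two calls typically agree unless you render to a surface of your own.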
For more information, see the documentation:
http://docs.madewithmarmalade.com/native/api_reference/iwgxapidocumentation/iwgxapioverview/screenmanipulation.html
and
http://docs.madewithmarmalade.com/native/api_reference/iw2dapidocumentation/iw2dapioverview/usingsurfaces.html
I am rendering text with ID2D1HwndRenderTarget.
When the UI window is resized, I want to prevent the rendered text from stretching, so it stays unchanged until I explicitly issue a rendering command.
The Direct2D documentation describes the behavior:
If EndDraw presents the buffer, this bitmap is stretched to cover the surface where it is presented: the entire client area of the window.
I know about the ID2D1HwndRenderTarget::Resize method, but I don't want to update the size immediately; I intend to call it later, according to my program's needs.
How can I ignore window events to prevent this visual stretch?
You are already ignoring size-change messages, and that's why the surface size does not match the client area size when presenting. You could try to offset this effect by setting a target scale according to the "client area / current target size" factor right before calling EndDraw(). I have no idea whether that will help, or what will happen to the uncovered window area outside the current target rectangle.
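One way to read that suggestion: if a target of size T is stretched to the client size C when presented, pre-scaling your drawing by T/C cancels the stretch. A sketch of the factor computation (the helper is hypothetical; the Direct2D call in the comment is how it would be applied):

```cpp
struct Scale { float x, y; };

// Factor that counteracts the present-time stretch from target size to
// client size: content drawn scaled by target/client ends up at 1:1
// after the swap chain stretches it back to the client area.
Scale stretchCompensation(float targetW, float targetH,
                          float clientW, float clientH) {
    return { targetW / clientW, targetH / clientH };
}

// Applied before drawing, e.g.:
//   Scale s = stretchCompensation(...);
//   renderTarget->SetTransform(D2D1::Matrix3x2F::Scale(s.x, s.y));
//   ... draw ...
//   renderTarget->EndDraw();
```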
In my AppDelegate class, when using Cocos Studio, I have code something like this:
auto glview = director->getOpenGLView();
if (!glview) {
    glview = GLViewImpl::createWithRect("HelloCpp", Rect(0, 0, 960, 640));
    director->setOpenGLView(glview);
}
director->getOpenGLView()->setDesignResolutionSize(960, 640, ResolutionPolicy::SHOW_ALL);
Whereas in a normal cocos2d-x project without Cocos Studio, the code in the AppDelegate class is like this:
auto glview = director->getOpenGLView();
if (!glview) {
    glview = GLViewImpl::create("My Game");
    director->setOpenGLView(glview);
}
My doubt is: is it mandatory to call setDesignResolutionSize, and should it be the same for every device size?
In cocos2d-x, all coordinates are based on the design resolution size. By doing so, you can use the same coordinate system on all screen sizes. The SHOW_ALL resolution policy displays the entire area without any change; however, since not all devices are the same size, you will have black borders depending on the screen size.
The NO_BORDER policy will crop some of the surface, so you will lose part of the world depending on the device screen size. If you are planning to use it, you have to make sure the important parts are in the safe zone so that your game is not affected. There is one thing to pay attention to with NO_BORDER: the design resolution will not be the same as the visible area. If this is the policy you choose, you will need the getVisibleSize() and getVisibleOrigin() functions to work out where to place sprites and game objects.
Perhaps the best method is to use the FIXED_HEIGHT or FIXED_WIDTH policies. If you choose FIXED_HEIGHT, you are simply telling cocos2d-x that you want the entire height displayed within the visible area of any device. The width can be cut off depending on the device, and the width of the design resolution will be recalculated. You can take this approach if you don't care about the width of the game; in other words, if your game requires the entire height for the game area, FIXED_HEIGHT is your policy.
As you may have guessed, FIXED_WIDTH works similarly, but for the width rather than the height.
Both FIXED_WIDTH and FIXED_HEIGHT modify the design resolution's width or height, depending on which you choose. The good thing about this is that the resulting design resolution will be identical to the visible area, which makes it easier to locate and position the sprites in your game.
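The recalculation described above comes down to simple arithmetic. This is a simplified model of what the policy does, not the actual cocos2d-x code:

```cpp
// FIXED_HEIGHT: the scale factor is chosen so that the design height
// exactly fills the screen height; the design width is then recomputed
// from the screen width, so design resolution == visible area.
float fixedHeightDesignWidth(float designH, float screenW, float screenH) {
    float scale = screenH / designH;   // the whole height stays visible
    return screenW / scale;            // new design width = visible width
}
```

For example, with a 320-point design height on a 960x640 screen, the scale is 2 and the recomputed design width is 480.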
More information on these topics, with clearer explanations, can be found at the following links. Although they are a bit old, they give you an idea of how it all works.
http://www.cocos2d-x.org/wiki/Multi_resolution_support
http://cocos2d-x.org/wiki/Detailed_explanation_of_Cocos2d-x_Multi-resolution_adaptation
I use OpenGL. I would like to create a menu control, but for this I need resolution-independent positioning. I mean that I can set the position of a button by giving a coordinate in 1024x768, but what if my window isn't that size? And in fullscreen mode I haven't found a method to change the resolution, although I can query it. So I have the screen width/height, the window width/height, and four coordinates in 1024x768 space for a rectangle. What should I do?
I am having trouble understanding the concept of the UpdateLayeredWindow API, how it works, and how to implement it. Say, for example, I want to override CFrameWnd and draw a custom, alpha-blended frame with UpdateLayeredWindow. As I understand it, the only way to draw child controls is to either: blend them into the frame's bitmap buffer (created with CreateCompatibleBitmap) and redraw the whole frame, or create another window that sits on top of the layered frame and draws child controls regularly (which defeats the whole idea of layered windows, because the window region wouldn't update anyway).
If I use the first method, the whole frame is redrawn; surely this is impractical for a large application? Or is it that the frame is constantly updated anyway, so modifying the bitmap buffer wouldn't cause extra redrawing?
An example of a window similar to what I would like to achieve is the Skype notification box/incoming call box: a translucent frame/window with child controls sitting on top, which you can move around the screen.
In a practical, commercial world, how do I do it? Please don't refer me to the documentation; I know what it says. I need someone to explain the practical methods and infrastructure I should use to implement this.
Thanks.
It is very unclear exactly what aspect of layered windows gives you a problem, so I'll just noodle on about how they are implemented and explain their limitations from that.
Layered windows are implemented by using a hardware feature of the video adapter called "layers". The adapter has the basic ability to combine the pixels from distinct chunks of video memory, mixing them before sending them to the monitor. Obvious examples of this are the mouse cursor, which gets superimposed on the pixels of the desktop frame buffer so it doesn't take a lot of effort to animate it when you move the mouse; the overlay used to display a video, where the video stream decoder writes the video pixels directly to a separate frame buffer; or the shadow cast by the frame of a top-level window on top of the windows behind it.
The video adapter allows a few simple logical operations on the two pixel values when combining their values. The first one is an obvious one, the mixing operation that lets some of the pixel value overlap the background pixel. That effect provides opacity, you can see the background partially behind the window.
The second one is color-keying, the kind of effect you see used when the weather man on TV stands in front of a weather map. He actually stands in front of a green screen, the camera mixing panel filters out the green and replaces it with the pixels from the weather map. That effect provides pure transparency.
You see this back in the arguments passed to UpdateLayeredWindow(), the function you must call in your code to set up the layered window. The dwFlags argument selects the basic operations supported by the video hardware: the ULW_ALPHA flag enables the opacity effect, and the ULW_COLORKEY flag enables the transparency effect. The transparency effect requires the color key, which is specified with the crKey argument. The opacity effect is controlled with the pblend argument. This one is built for future expansion, which hasn't happened yet; the only interesting field in the BLENDFUNCTION struct is SourceConstantAlpha, which controls the amount of opacity.
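As a concrete illustration, here is the shape of that struct and the two flags mentioned above. The definitions are self-contained mirrors of the wingdi.h declarations so the sketch compiles without <windows.h>; in real code you would include the Windows headers and use the real types:

```cpp
#include <cstdint>

// Mirrors of the Win32 constants discussed above (values from wingdi.h):
const uint32_t ULW_COLORKEY = 0x01;   // transparency via crKey color key
const uint32_t ULW_ALPHA    = 0x02;   // opacity via the blend function

struct BlendFunction {                // mirrors BLENDFUNCTION
    uint8_t BlendOp;                  // AC_SRC_OVER (0), the only defined op
    uint8_t BlendFlags;               // must be 0
    uint8_t SourceConstantAlpha;      // 0 (transparent) .. 255 (opaque)
    uint8_t AlphaFormat;              // AC_SRC_ALPHA (1) for per-pixel alpha
};

// A half-opaque window that still honors per-pixel alpha:
BlendFunction halfOpaque() {
    return BlendFunction{0, 0, 128, 1};
}
```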
So one basic effect available for a layered window is opacity: overlapping the background windows and leaving them partially visible. One restriction is that the entire window is partially opaque, including the border and the title bar. That doesn't look good, so you typically want to create a borderless window and take on the burden of drawing your own window frame. That requires a fair bit of code, by the way.
The other basic effect is transparency, completely hiding parts of a window. You often want to combine the two effects, and that requires two layered windows: one that provides the partial opacity, and another on top, owned by the bottom one, that displays the parts of the window that are opaque, like the controls, using the color key to make its background transparent and let the bottom window show through.
Beyond this, another important feature for custom windows is enabled by SetWindowRgn(). It lets you give the window a shape other than a rectangle. Again, it is important to omit the border and title bar; they don't work on a shaped window. The programming effort is to combine these features in a tasteful way that isn't too grossly different from the look-and-feel of windows created by other applications, and to write the code that paints the replacement window parts while still keeping the window functional like a regular window. Resizing and moving the window, for example, you typically support by custom handling of the WM_NCHITTEST message.
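The WM_NCHITTEST idea can be sketched as a pure function: given a point in the window, return a hit-test code so that Windows treats part of your borderless window as a draggable caption or a resize grip. The hit-test codes mirror the winuser.h values; the caption height and grip size are assumptions for illustration:

```cpp
// Hit-test codes, mirroring the winuser.h values for illustration:
enum HitTest { HTCLIENT = 1, HTCAPTION = 2, HTBOTTOMRIGHT = 17 };

// Map a client-area point to a hit-test code. Returning HTCAPTION makes
// Windows move the window when that area is dragged; HTBOTTOMRIGHT makes
// it resize. Sizes below are assumed values, not Windows metrics.
HitTest hitTest(int x, int y, int width, int height) {
    const int grip = 16;                   // assumed resize-grip size
    const int captionHeight = 24;          // assumed caption strip height
    if (x >= width - grip && y >= height - grip)
        return HTBOTTOMRIGHT;              // bottom-right resize corner
    if (y < captionHeight)
        return HTCAPTION;                  // top strip acts as the title bar
    return HTCLIENT;                       // everything else is normal client
}
```

In the real window procedure you would return this value from the WM_NCHITTEST handler, falling back to DefWindowProc for anything you don't customize.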
I'm taking a computer graphics course this semester at college and our first assignment is to build a program that works much like Microsoft paint. We need to set options for drawing with shapes of different colors, sizes, and transparency parameters.
I'm having trouble finding information on how to program the ability to draw with a given shape on mouse drag. I'm not asking for the solution in code, but guidance on where to study functions that might accomplish this.
I'm completely new to OpenGL (but not C++) & I own "Computer Graphics with OpenGL" 4th ed. by Hearn & Baker. None of the topics suggest this capability.
What's probably being asked of you is to create a single-buffered window, or to switch to drawing on the front buffer, and to draw some shape at the mouse pointer's location when a button is pressed (and dragged), without clearing the front buffer in between. For added robustness, draw to a texture attached to a Framebuffer Object, so that dragging some window over yours will not corrupt the user's drawing.
Keywords: set the viewport to the window size, use an ortho projection matching the window bounds, and do not use glClear (except for resetting the picture).