I have multiple scenes in my mobile game. In one scene the aspect ratio must be 16:10 (vertical), in another it must be horizontal. I set the aspect ratio via Unity's GUI, but when I do this, the other scene becomes 16:10 too. Is there a way to change the aspect ratio from code, so that it changes when the scene is loaded?
So what you want to do is:
First go to this Unity Docs link and understand the use of the Canvas Scaler and the options it provides. Basically you can set the UI Scale Mode to Scale With Screen Size and set the required resolution. In my case I set it to 1920x1080 so that each screen has the same resolution across scenes. Note that this must be applied to every canvas within the scenes.
This will allow you to configure any specific aspect ratio in terms of resolution across different scenes, as required. I hope this helps.
Screen.orientation = ScreenOrientation.LandscapeLeft;
Screen.orientation = ScreenOrientation.Portrait;
With these two lines I did what I wanted to do. If the scene is playable horizontally, I used the first line, and for vertical, I used the second. It worked like a charm.
In Coin3D, I have a scene set up, and the user can view the scene from various directions using X, Y and Z buttons that move the camera.
When one of these buttons is selected, I would like the output to be cropped so that the camera zooms in as close as possible and I can see the rendering in full at the current viewing angle.
I have tried various things, including setting the camera's heightAngle.
I have also tried Camera->viewAll, but viewAll makes sure the whole scene is visible in all three dimensions.
I need a better solution. Please assist.
When you set ResolutionPolicy::SHOW_ALL in cocos2d-x, black areas may appear at the top and bottom or on the left and right sides. Can I cover the black areas with some nice images?
I don't think you can just add something into those black areas.
Instead, the solution is to build a scene that already contains the nice images you want to add. The steps are:
Use this inside your AppDelegate::applicationDidFinishLaunching() to detect screen size:
CCSize frameSize = pEGLView->getFrameSize();
Set a design resolution proportional to this frameSize, maintaining its aspect ratio.
Put your "content" in the middle. Then calculate where the "black areas" are and add sprites to cover them. Keep in mind that the holes can differ between screens, so you need to do some maths there and properly cover the different hole sizes.
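The "where are the black areas" step above boils down to a little letterbox arithmetic. Here is a minimal sketch in plain C++ (no cocos2d-x types; the frame and design sizes used below are made up for illustration):

```cpp
#include <algorithm>
#include <cassert>

// With SHOW_ALL, the design resolution is scaled uniformly so that it
// fits entirely inside the device frame; whatever the scaled content
// does not cover becomes the black bars. This computes the bar size on
// each side (one of the two values is always zero).
struct Borders { float x; float y; };

Borders letterboxBorders(float frameW, float frameH,
                         float designW, float designH) {
    float scale = std::min(frameW / designW, frameH / designH);
    float contentW = designW * scale;
    float contentH = designH * scale;
    // Split the uncovered area evenly between opposite sides.
    return { (frameW - contentW) / 2.0f, (frameH - contentH) / 2.0f };
}
```

For example, an 800x480 frame with a 480x320 design resolution gives a scale of 1.5 and content covering 720x480, so you would place two 40 px wide cover sprites on the left and right. The same numbers tell you the positions and sizes of the sprites for each device.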
So, ResolutionPolicy::SHOW_ALL sets the OpenGL view itself to that size; that's why you can't draw into the black areas directly.
On the other hand, there are many ways to tackle this.
What I did was:
1. Don't set the ResolutionPolicy.
2. Use a layer for those nice images/effects.
3. Create a new layer in the same scene, set its width and height according to the aspect ratio of your content, and make this your primary game view.
I wish to draw a period or dot or point in GraphicsView that:
has an arbitrary size, which works like the radius of a circle,
is affected by scaling transformations in that its position is changed according to the current scale,
but its arbitrary size is NOT affected by scaling.
The problem I'm specifically tackling is depicting celestial bodies in a solar system visualizer. I want to make things with the proper distances from each other, but space is unimaginably empty - trying to depict an Earth-sized object with a proper radius would make it very hard for the viewer to actually see anything if said user zoomed out enough to see other planets. Therefore, I would like to mark the locations with dots that don't scale in diameter, while everything else (like orbital paths, distances) scales.
I have tried to use ItemIgnoresTransformations, but that makes the object ignore both the size changes and the location changes when the scale is altered. I want the object to be noticeable regardless of scale, but at the same time to be in its proper place.
Alternative solutions welcome.
EDIT1:
The new code looks like so:
ellipse2 = scene->addEllipse(0, 0, body.radius,body.radius,blackPen,greenBrush);
ellipse2->setFlag(QGraphicsItem::ItemIgnoresTransformations);
ellipse2->setPos(system.starX+body.getX(date2days(game.date))-body.radius/2.0,
system.starY+body.getY(date2days(game.date))-body.radius/2.0);
Previously, the position was simply put in the place of the 0s in the addEllipse() call. However, there is a problem: the planets' movements don't quite match the plotted orbital paths (I am currently simplifying to perfect circles with constant angular speed, rather than elliptical paths with variable angular speed). The actual paths seem to be shifted by some unknown (but scale-dependent) amount towards the top-left.
Here is how it looks unzoomed:
And here is how it looks zoomed:
This problem does not occur if the item is affected by transformations. What gives?
Found the problem. I needed to adjust for planetary radius in the rect, not the pos.
The correct code for this looks like:
ellipse2 = scene->addEllipse(-body.radius/2,
-body.radius/2,
body.radius,body.radius,blackPen,greenBrush);
ellipse2->setFlag(QGraphicsItem::ItemIgnoresTransformations);
ellipse2->setPos(system.starX+body.getX(date2days(game.date)),
system.starY+body.getY(date2days(game.date)));
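A toy 1-D model (not actual Qt code) shows why the rect, rather than the pos, has to absorb the radius offset: with ItemIgnoresTransformations, the item's scene position is still mapped through the view's scale, but its local coordinates are drawn unscaled, so any offset baked into pos picks up a scale-dependent error.

```cpp
#include <cassert>

// Toy model of ItemIgnoresTransformations: the scene position goes
// through the view's scale, the local coordinates do not.
double drawnCenter(double scenePos, double localCenter, double scale) {
    return scenePos * scale + localCenter;  // local part escapes the scale
}

// Old approach: radius offset baked into pos, rect starting at (0,0),
// so the local center sits at radius/2 and does not scale with the view.
double oldCenter(double orbit, double radius, double scale) {
    return drawnCenter(orbit - radius / 2.0, radius / 2.0, scale);
}

// Fixed approach: rect centered on the origin, pos is the pure orbit
// position -- the drawn center lands exactly at orbit * scale.
double fixedCenter(double orbit, double scale) {
    return drawnCenter(orbit, 0.0, scale);
}
```

With orbit = 100, radius = 10 and a view scale of 2, the old approach draws the center at 195 while the path is plotted at 200. The error is (radius/2)·(1 − scale): zero at scale 1, growing with zoom, and pointing toward the top-left, exactly as observed.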
I'm writing a small CAD application in Qt 5.1 and I'm trying to figure out how to get the coordinates of the QGraphicsScene to correspond to real-world dimensions. I intend to keep coordinates in their original form of (value, unit) to preserve data when switching from mm to inches, for example.
I've found in their documentation (http://qt-project.org/doc/qt-4.8/coordsys.html) that the default unit corresponds to 1 pixel on pixel-based paint devices and 1/72 inch for print-based devices.
Is this the conversion I should use (72 units per inch)? That's not particularly convenient... Is there a better way to associate real-world dimensions with the coordinates in a QGraphicsScene? Can this conversion be adjusted?
Thanks :)
Relativity is the important thing here and there's no direct correlation between what you see in a QGraphicsScene and the real-world, unless you decide to create a representative scale.
You can happily create a molecular model to scale, of atoms inside an object, or you could model a solar system that allows you to travel between stars and galaxies. What is important is what you as the developer decide your scale is going to be.
In a solar system, going from a GraphicsScene coordinate position of (0,0) to (10,0) could mean a distance of 10 meters or 10 miles. So long as the objects in the scene are modelled to the same scale and positioned in the scene using that scale, that's what matters.
When it comes to printing, I agree with @FrankOsterfeld that it's a matter of scaling the view on the scene. If you want a direct correlation between what you see on screen and what you print on paper, you'd be better off not using a QGraphicsScene here.
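One way to make that representative scale concrete is to funnel every coordinate through a single, explicitly chosen conversion factor. A minimal sketch, assuming an arbitrary convention of 10 scene units per millimetre (the factor itself is your choice, not anything Qt prescribes):

```cpp
#include <cassert>

// All scene geometry goes through one conversion factor of your own
// choosing; 10 units per millimetre here is an arbitrary example.
const double UNITS_PER_MM = 10.0;

double mmToScene(double mm)    { return mm * UNITS_PER_MM; }
double sceneToMm(double units) { return units / UNITS_PER_MM; }

// Switching display units is then just another conversion on top.
double inchesToScene(double in) { return mmToScene(in * 25.4); }
```

A 50 mm line becomes a 500-unit item in the scene; to print at true size you would scale the view so that 500 scene units map onto 50 mm of the print device, rather than relying on the 1/72 inch default.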
I just want to do a simple animation (for example in C++ using OpenGL) of some moving object - let's say simple horizontal movement of a square from left to right.
In OpenGL, I can use the "double-buffering" method, and let's say a user running my application has "vertical sync" turned on, so I can call some function every time the monitor refreshes itself (I can achieve that, for example, using the Qt toolkit and its swapBuffers function).
So I think the smoothest animation I can achieve is to move the square by, for example, 1 pixel (it can be other values) every time the monitor refreshes, so at each frame the square is 1 pixel further. I have tested this, and it surely works smoothly.
But the problem arises when I want a separate thread for the game logic (moving the square 1 pixel to the right) and for the animation (displaying the square's current position on the screen). Say the game logic thread is a while loop in which I move the square by 1 pixel and then sleep for some time, for example 10 milliseconds, while my monitor refreshes every 16 milliseconds. The movement won't be 100% smooth, because sometimes the monitor will refresh twice while the square has moved by only 1 pixel rather than 2 (the monitor and the game logic thread run at two different frequencies), and the movement will look a little jerky.
So, logically, I could stay with the first, super-smooth method, but it cannot be used in, for example, multiplayer (server-client) games, because different computers have different monitor frequencies, so I should use different threads for game logic (on the server) and for animation (on the clients).
So my question is:
Is there some method, using different threads for game logic and animation, that gives 100% smooth animation of a moving object? If one exists, please describe it here. Or, if I had a more complex scene to render, would I simply not notice the slightly jerky movement that I see now, when I move a simple square horizontally and concentrate deeply on it? :)
Well, this is actually typical separate-game-loop behaviour. You manage all your physics (movement) related actions in one thread, letting the render thread do its work. This is actually desirable.
Don't forget that the point of implementing the game loop this way is to get the maximum available frame rate while preserving a constant physics speed. At higher FPS you cannot see this effect at all, unless there is some other code-related problem, such as a coupling between frame rate and physics.
If you want to achieve what you describe as perfect smoothness, you could synchronize your physics engine with VSync: simply do all your physics BEFORE the refresh kicks in, then wait for the next one.
But this all applies to constant-speed objects. If you have an object with dynamic speed, you can never know when to draw it to be "in sync". The same problem arises when you want multiple objects with different constant speeds.
Also, this is NOT what you want in complex scenes. The whole idea of V-sync is to limit the screen-tearing effect. You should definitely NOT hook your physics or rendering code to the display refresh rate; you want your physics code to run independently of the user's display refresh rate. This could be a REAL pain in multiplayer games, for example. For a start, look at this page: How A Game Loop Works
EDIT:
I'd say your vision of perfect smoothness is unrealistic. You can mask the problem using the techniques Kevin describes, but you will always run up against hardware limits such as the refresh rate or display pixelation. For example, say you have a window of 640x480 px and you want your object to move along a vector heading towards the bottom-right corner. You must increment the object's coordinates by a float (640/480), but rendering snaps to integers, so your object moves jaggedly. There is no way around this. At low speeds you can notice it. You can blur it, or make it move faster, but you can never get rid of it...
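A common answer to the "two different frequencies" problem is the standard fixed-timestep loop with render interpolation: physics advances in constant steps independent of the display, and each rendered frame draws a position blended between the two most recent physics states. A minimal single-threaded sketch of the idea (all constants are illustrative, not from the question):

```cpp
#include <cassert>

// Fixed-timestep update with render interpolation: physics moves in
// whole DT steps, the renderer draws a position interpolated between
// the previous and current physics states.
struct State { double x = 0.0; };

const double DT = 0.01;       // fixed physics step, seconds (100 Hz)
const double SPEED = 100.0;   // pixels per second

void step(State& s) { s.x += SPEED * DT; }

// Call once per rendered frame with the real elapsed frame time.
double renderPosition(State& prev, State& curr,
                      double& accumulator, double frameTime) {
    accumulator += frameTime;
    while (accumulator >= DT) {   // consume time in fixed steps
        prev = curr;
        step(curr);
        accumulator -= DT;
    }
    double alpha = accumulator / DT;            // fraction into next step
    return prev.x + (curr.x - prev.x) * alpha;  // smooth draw position
}
```

At a 16 ms refresh, the drawn position advances by an even 1.6 px per frame even though physics only ever moves in whole 1 px steps (the renderer runs one step behind, which is what makes the interpolation possible). This removes the jerkiness described in the question without tying physics to the display refresh rate.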
Allow your object to move by fractions of a pixel. In OpenGL, this can be done for your example of a square by drawing the square onto a texture (i.e. with a one-pixel or larger border), rather than letting it be just the polygon edge. If you are rendering 2D sprite graphics, then you get this pretty much automatically (but if you have 1:1 pixel art it will be blurred/sharp/blurred as it crosses pixel boundaries).
Smooth (antialias) the polygon edge (GL_POLYGON_SMOOTH). The problem with this technique is that it does not work with Z-buffer-based rendering since it causes transparency, but if you are doing a 2D scene you can make sure to always draw back-to-front.
Enable multisample/supersample antialiasing, which is more expensive but doesn't have the above problem.
Make your object have a sufficiently animated appearance that the pixel shifts aren't easy to notice because there's much more going on at that edge (i.e. it is itself moving in place at much more than 1 pixel/frame).
Make your game sufficiently complex and engrossing that players are distracted from looking at the pixels. :)