I have been stuck on remapping my mouse position to a new frame for a few days and I am unsure what to do. I will provide images to describe my issue. The main problem is that I want to click on an object in my program and have the program highlight the object I select (in 3D space). I have this working perfectly when my application is in full-screen mode. I recently started rendering my scene into a smaller frame so that I can have editor tools on the sides (like Unity). Here is the transition (graphically) I made from working to not working:
So essentially the mouse coordinates go from (0, 0) to (screenWidth, screenHeight). I want to map these coordinates to run from (frameStartX, frameStartY) to (frameStartX + frameWidth, frameStartY + frameHeight). I did some research on linearly transforming a number to scale it to a new range, so I thought I could do this:
float frameMousePosX = (mousePosX - 0) / (screenWidth - 0) * ((frameWidth + frameStartX) - frameStartX) + frameStartX;
float frameMousePosY = (mousePosY - 0) / (screenHeight - 0) * ((frameHeight + frameStartY) - frameStartY) + frameStartY;
I assumed this would work, but it doesn't; it's not even close. I am really unsure what to do to get this transformation right.
Assuming the transformation works, I would want it to read (0, 0) at the bottom left of the frame, which is (x, y) in the attached image, and reach its maximum at the top right of the framed scene.
Any help would be extremely appreciated.
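To be concrete about what I'm after, a minimal sketch (assuming the mouse position is reported in window pixels with the origin at the top-left and y pointing down, and that frameStartX/frameStartY describe the frame's lower-left corner measured from the window's lower-left corner) would be something like:

// window-space mouse (origin top-left, y down) -> frame-local (origin bottom-left, y up)
float frameMouseX = mousePosX - frameStartX;
float frameMouseY = (screenHeight - mousePosY) - frameStartY;

// reject clicks outside the framed scene
bool insideFrame = frameMouseX >= 0.0f && frameMouseX <= frameWidth &&
                   frameMouseY >= 0.0f && frameMouseY <= frameHeight;

// normalised device coordinates for building picking rays, if needed
float ndcX = 2.0f * frameMouseX / frameWidth  - 1.0f;
float ndcY = 2.0f * frameMouseY / frameHeight - 1.0f;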
I'm creating a simple window manager for future projects and I seem to have run into a problem. I have a snippet of code which is supposed to move the viewport to the middle of the window whenever somebody resizes it, and it seems to work completely fine when changing position on the x-axis, as seen here. Unfortunately, it doesn't work on the y-axis; the viewport instead shows up at the bottom of the window. Here is the code that handles this:
/* create viewport */
if (win->width > win->height)
    glViewport((win->width / 2 - win->viewport.width / 2), 0, win->viewport.width, win->viewport.height);
else
    /* FIXME: viewport appears at bottom of window, i have no idea why */
    glViewport(0, (win->height / 2 - win->viewport.height / 2), win->viewport.width, win->viewport.height);
I have changed a number of variables in the equation, but none of them yielded any results. I have run the equation outside of glViewport and it returns the desired numbers. OpenGL appears to be forcing the viewport's position to (0, 0) and I have yet to figure out why. If it helps at all, I'm using OpenGL 3.3 and SDL2 on a Windows machine.
If anybody could tell me what I need to do to fix this, I would greatly appreciate it. Please and thank you.
I have run into a similar problem with SDL2 too.
I think the missing part is that you are not taking the aspect ratio into account.
Also, when using SDL2 with OpenGL, you should consider that the drawable area can be different from the window area.
Assuming w and h are the original sizes, draw_w and draw_h the current drawable-area sizes, and view_w and view_h the viewport sizes, we can calculate it as:
SDL_GL_GetDrawableSize(window, &draw_w, &draw_h);
float ratio = (float)w / (float)h;   /* aspect ratio of the original size */
if (draw_w / ratio < draw_h)
{
    /* width is the limiting dimension: fill it and derive the height */
    view_w = draw_w;
    view_h = (int)((float)view_w / ratio);
}
else
{
    /* height is the limiting dimension: fill it and derive the width */
    view_h = draw_h;
    view_w = (int)((float)view_h * ratio);
}
/* centre the viewport inside the drawable area */
x = (draw_w - view_w) / 2;
y = (draw_h - view_h) / 2;
glViewport(x, y, view_w, view_h);
I have used a similar function applied to an SDL2 event filter, checking for:
if (event->type == SDL_WINDOWEVENT && (event->window.event == SDL_WINDOWEVENT_SIZE_CHANGED || event->window.event == SDL_WINDOWEVENT_EXPOSED))
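One way to hook this up (a sketch only; updateViewport is just a placeholder for the viewport code above) is with an SDL2 event watch:

int resizeWatcher(void * userdata, SDL_Event * event)
{
    if (event->type == SDL_WINDOWEVENT &&
        (event->window.event == SDL_WINDOWEVENT_SIZE_CHANGED ||
         event->window.event == SDL_WINDOWEVENT_EXPOSED))
    {
        SDL_Window * window = SDL_GetWindowFromID(event->window.windowID);
        updateViewport(window); /* placeholder: recompute view_w, view_h, x, y and call glViewport */
    }
    return 1; /* the return value is ignored for event watches */
}

/* after creating the window and GL context: */
SDL_AddEventWatch(resizeWatcher, NULL);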
I'm developing a Flutter plugin which targets only Android for now. It's a kind of synthesis thing: users can load an audio file into memory, adjust its pitch (not pitch shift), and play multiple sounds with the least possible delay using an audio library called Oboe.
I managed to get PCM data from the audio files that the MediaCodec class supports, and I also succeeded in handling pitch by manipulating playback through direct access to the PCM array.
This PCM data is stored as a float array ranging from -1.0 to 1.0. I now want to support a panning feature, just like internal Android classes such as SoundPool do. I'm planning to follow how SoundPool handles panning. There are two values I have to pass to SoundPool when applying a panning effect: left and right. These two values are floats and must range from 0.0 to 1.0.
For example, if I pass (1.0F, 0.0F), then users hear the sound only in the left ear; (1.0F, 1.0F) is normal (centered). Panning wasn't a problem... until I encountered stereo sounds. I know what to do to perform panning with stereo PCM data, but I don't know how to make the panning sound natural.
If I shift all sound to the left side, then the right channel of the sound must be played on the left side. Conversely, if I shift all sound to the right side, then the left channel must be played on the right side. I also noticed that there is something called the panning rule (pan law), which means that sound must be a little louder when it's shifted to a side (about +3 dB). I tried to find a way to perform a natural panning effect, but I really couldn't find an algorithm or reference for it.
Below is the structure of the float stereo PCM array. I didn't modify the array when decoding the audio files, so it should be the common interleaved layout:
[left_channel_sample_0, right_channel_sample_0, left_channel_sample_1, right_channel_sample_1,
...,
left_channel_sample_n, right_channel_sample_n]
and I have to pass this PCM array to the audio stream as in the C++ code below:
void PlayerQueue::renderStereo(float * audioData, int32_t numFrames) {
    for(int i = 0; i < numFrames; i++) {
        //When audio file is stereo...
        if(player->isStereo) {
            if((offset + i) * 2 + 1 < player->data.size()) {
                audioData[i * 2] += player->data.at((offset + i) * 2);
                audioData[i * 2 + 1] += player->data.at((offset + i) * 2 + 1);
            } else {
                //PCM data reached end
                break;
            }
        } else {
            //When audio file is mono...
            if(offset + i < player->data.size()) {
                audioData[i * 2] += player->data.at(offset + i);
                audioData[i * 2 + 1] += player->data.at(offset + i);
            } else {
                //PCM data reached end
                break;
            }
        }

        //Prevent overflow
        if(audioData[i * 2] > 1.0)
            audioData[i * 2] = 1.0;
        else if(audioData[i * 2] < -1.0)
            audioData[i * 2] = -1.0;

        if(audioData[i * 2 + 1] > 1.0)
            audioData[i * 2 + 1] = 1.0;
        else if(audioData[i * 2 + 1] < -1.0)
            audioData[i * 2 + 1] = -1.0;
    }

    //Add numFrames to offset, so it can continue playing PCM data in next session
    offset += numFrames;
    if(offset >= player->data.size()) {
        offset = 0;
        queueEnded = true;
    }
}
I excluded the playback-manipulation calculation to simplify the code. As you can see, I have to pass the PCM data to the audioData float array manually. I'm adding the PCM data (accumulating rather than assigning) in order to mix multiple sounds, including multiple instances of the same sound.
How can I perform a panning effect with this PCM array? It would be good if I could follow SoundPool's mechanism, but it is fine as long as I can perform the panning effect properly. (For example, the pan value could simply range from -1.0 to 1.0, with 0 meaning centered.)
When applying the panning rule, what is the relationship between PCM values and decibels? I know how to make a sound louder, but I don't know how to make it louder by an exact number of decibels. Is there a formula for this?
Pan rules or pan laws are implemented a bit differently from manufacturer to manufacturer.
One implementation that is frequently used is that when a sound is panned fully to one side, that side is played at full volume, whereas the other side is attenuated fully. If the sound is panned to the center, both sides are attenuated by roughly 3 decibels.
To do this you can multiply the sound source by the calculated amplitude, e.g. (untested pseudo code):
player->data.at((offset + i) * 2) * 1.0; // left signal at full volume
player->data.at((offset + i) * 2 + 1) * 0.0; // right signal fully attenuated
To get the desired amplitudes you can use the sin function for the left channel and the cos function for the right channel.
Notice that when the input to sin and cos is pi/4, the amplitude is about 0.707 on both sides. Since 20 * log10(0.707) ≈ -3 dB, this gives you the roughly 3 decibels of attenuation on both sides.
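That also answers the decibel part of your question: a linear PCM gain g corresponds to 20 * log10(g) decibels, so to scale by an exact number of decibels you can convert like this (untested, C++, needs <cmath>):

// Convert a gain expressed in decibels into a linear multiplier for PCM samples.
float dbToGain(float db) {
    return powf(10.0f, db / 20.0f);
}
// e.g. dbToGain(-3.0f) ≈ 0.708, dbToGain(+3.0f) ≈ 1.413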
So all that is left to do is to map the range [-1, 1] to the range [0, pi/2], e.g. assuming you have a value for pan which is in the range [-1, 1] (untested pseudo code):
pan_mapped = ((pan + 1) / 2.0) * (Math.pi / 2.0);
left_amplitude = sin(pan_mapped);
right_amplitude = cos(pan_mapped);
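Applied to your renderStereo code, the mono branch could look roughly like this (untested sketch; pan is assumed to be a member or parameter in [-1, 1], and it needs <cmath>; note that with this exact mapping pan = +1 ends up fully on the left, so swap the sin/cos if you prefer the opposite convention):

const float kHalfPi = 1.5707963f;

// equal-power gains for a pan value in [-1, 1]
float panMapped = ((pan + 1.0f) * 0.5f) * kHalfPi;
float leftGain  = sinf(panMapped);   // both ~0.707 (about -3 dB) at pan = 0
float rightGain = cosf(panMapped);

// mono source mixed into the interleaved stereo output
audioData[i * 2]     += player->data.at(offset + i) * leftGain;
audioData[i * 2 + 1] += player->data.at(offset + i) * rightGain;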
UPDATE:
Another option frequently used (e.g. by the Pro Tools DAW) is to have a pan setting on each side, effectively treating the stereo source as two mono sources. This allows you to place the left source freely in the stereo field without affecting the right source.
To do this you would do the following (untested pseudo code):
left_output += left_source(i) * sin(left_pan)
right_output += left_source(i) * cos(left_pan)
left_output += right_source(i) * sin(right_pan)
right_output += right_source(i) * cos(right_pan)
The settings of these two pans are up to the operator and depend on the recording and the desired effect.
How you want to map this to a single pan control is up to you. I would just advise that when the pan is 0 (centred), the left channel is played only on the left side and the right channel only on the right side. Otherwise you would interfere with the original stereo recording.
One possibility would be that the segment [-1, 0) controls the right pan, leaving the left side untouched, and vice versa for [0, 1]:
import math

hPi = math.pi / 2.0

def stereoPan(x):
    if x < 0.0:
        print("left source:")
        print(1.0)                     # amplitude to left channel
        print(0.0)                     # amplitude to right channel
        print("right source:")
        print(math.sin(abs(x) * hPi))  # amplitude to left channel
        print(math.cos(abs(x) * hPi))  # amplitude to right channel
    else:
        print("left source:")
        print(math.cos(x * hPi))       # amplitude to left channel
        print(math.sin(x * hPi))       # amplitude to right channel
        print("right source:")
        print(0.0)                     # amplitude to left channel
        print(1.0)                     # amplitude to right channel
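And the stereo branch of your renderStereo could combine the two source channels with those gains roughly like this (untested sketch; again assuming a single pan value in [-1, 1] and <cmath>):

const float kHalfPi = 1.5707963f;
float leftToLeft, leftToRight, rightToLeft, rightToRight;

if(pan < 0.0f) {
    // panning towards the left: the left channel stays where it is,
    // the right channel is progressively folded into the left output
    leftToLeft   = 1.0f;
    leftToRight  = 0.0f;
    rightToLeft  = sinf(-pan * kHalfPi);
    rightToRight = cosf(-pan * kHalfPi);
} else {
    // panning towards the right: mirror image of the above
    leftToLeft   = cosf(pan * kHalfPi);
    leftToRight  = sinf(pan * kHalfPi);
    rightToLeft  = 0.0f;
    rightToRight = 1.0f;
}

float l = player->data.at((offset + i) * 2);
float r = player->data.at((offset + i) * 2 + 1);
audioData[i * 2]     += l * leftToLeft  + r * rightToLeft;
audioData[i * 2 + 1] += l * leftToRight + r * rightToRight;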
The following is not meant to contradict anything in the excellent answer given by @ruff09. I'm just going to add some thoughts and theory that I think are relevant when trying to emulate panning.
I'd like to point out that simply using volume differences has a couple of drawbacks. First off, it doesn't match the real-world phenomenon. Imagine you are walking down a sidewalk and right there on the street, on your right, is a worker with a jackhammer. We could make the sound 100% volume on the right and 0% on the left. But in reality much of what we hear from that source also arrives at the left ear, drowning out other sounds.
If you omit left-ear volume for the jackhammer to obtain maximum right-pan, then even quiet sounds on the left will be audible (which is absurd), since they will not be competing with jackhammer content on that left track. If you do have left-ear volume for the jackhammer, then the volume-based panning effect will swing the location more towards the center. Dilemma!
How do our ears differentiate locations in such situations? I know of two processes that can potentially be incorporated into the panning algorithm to make the panning more "natural." One is a filtering component: high frequencies with wavelengths smaller than the width of our head get attenuated at the far ear, so you could add some differential low-pass filtering to your sounds. Another aspect is that, in our scenario, the jackhammer sounds reach the right ear a fraction of a millisecond before they reach the left, so you could also add a bit of delay based on the panning angle. The time-based panning effect works most clearly with frequency content whose wavelengths are larger than our heads (e.g., some high-pass filtering would also be a component).
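As a very rough illustration of the delay component only (my own sketch, assuming a maximum head-related delay of about 0.66 ms and a 48 kHz stream, both of which are assumptions):

// delay, in whole frames, to apply to the ear facing away from the source
const float kMaxItdSeconds = 0.00066f;   // approximate maximum interaural time difference
const float kSampleRate    = 48000.0f;
int delayFrames = (int)(fabsf(pan) * kMaxItdSeconds * kSampleRate + 0.5f);  // roughly 0..32 frames
// the far ear's samples would then be read delayFrames frames behind the near ear's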
There has also been a great deal of work on how the shapes of our ears differentially filter sounds. I think we learn to use this as we grow up by subconsciously associating different timbres with different locations (this especially pertains to altitude and front-versus-back stereo issues).
These processes carry big computation costs, though, so simplifications such as sticking with purely amplitude-based panning are the norm. Thus, for sounds in a 3D world, it is probably best to prefer mono source content for items that need dynamic location changes, and only use stereo content for background music or ambient content that doesn't need dynamic panning based on player location.
I want to do some more experimenting with dynamic time-based panning combined with a bit of amplitude, to see whether it can be used effectively with stereo cues. Implementing a dynamic delay is a little tricky, but not as costly as filtering. I'm wondering whether there might be ways to record a sound source (preprocess it) to make it more amenable to real-time filter- and time-based manipulation that results in effective panning.
I'm writing a C++ raytracer and I'm confronting a classic raytracing problem: with a high vertical FOV, shapes get more distorted the closer they are to the edges of the image.
I know why this distortion happens, but I don't know how to resolve it (of course, reducing the FOV is an option, but I think there is something to change in my code). I've browsed different computing forums but didn't find any way to resolve it.
Here's a screenshot to illustrate my problem.
I think that the problem is that the view plane where I'm projecting my rays isn't actually flat, but I don't know how to resolve this. If you have any tip to resolve it, I'm open to suggestions.
I'm using a right-handed coordinate system.
The camera system vectors, the direction vector, and the light vector are normalized.
If you need some code to check something, I'll put it in an answer with the part you ask.
Code for the ray generation:
// PixelScreenX = (pixelX + 0.5) / imageWidth
// PixelCameraX = (2 * PixelScreenX - 1) * ImageAspectRatio * tan(fov / 2)
// (options.scale corresponds to tan(fov / 2))
float x = (2 * (i + 0.5f) / (float)options.width - 1) *
          options.imageAspectRatio * options.scale;

// PixelScreenY = (pixelY + 0.5) / imageHeight
// PixelCameraY = (1 - 2 * PixelScreenY) * tan(fov / 2)
float y = (1 - 2 * (j + 0.5f) / (float)options.height) * options.scale;
Vec3f dir;
options.cameraToWorld.multDirMatrix(Vec3f(x, y, -1), dir);
dir.normalize();
newColor = _renderer->castRay(options.orig, dir, objects, options);
There is nothing wrong with your projection. It produces exactly what it should produce.
Let's consider the following figure to see how all the quantities interact:
We have the camera position, the field of view (as an angle) and the image plane. The image plane is the plane that you are projecting your 3D scene onto. Essentially, this represents your screen. When you are viewing your rendering on the screen, your eye serves as the camera. It sees the projected image and if it is positioned at the right point, it will see exactly what it would see if the actual 3D scene was there (neglecting effects like depth of field etc.)
Obviously, you cannot modify your screen (you could change the window size, but let's stick with a constant-size image plane). Then there is a direct relationship between the camera's position and the field of view: as the field of view increases, the camera moves closer and closer to the image plane. Like this:
Thus, if you increase your field of view in code, you need to move your eye closer to the screen to get the correct perception. You can actually try that with your image: move your eye very close to the screen (I'm talking about 3 cm). If you look at the outer spheres now, they actually look like real balls again.
In summary, the field of view should approximately match the geometry of the viewing setup. For a given screen size and average viewing distance, this can be calculated easily. If your viewing setup does not match the assumptions in your code, 3D perception will break down.
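For example, for a display about 0.3 m tall viewed from roughly 0.6 m away (just example numbers), the matching vertical field of view would be:

// needs <cmath> for atanf
float screenHeight    = 0.3f;   // metres (example value)
float viewingDistance = 0.6f;   // metres (example value)
float fovY = 2.0f * atanf((screenHeight * 0.5f) / viewingDistance);  // ≈ 0.49 rad ≈ 28 degrees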
I've been trying to solve this one for hours and I can't figure out where I am going wrong.
On my page there is an image and a "selection frame". This frame can be moved and resized.
I am trying to make the image turn with the center point of the turn being the center of the frame.
I created a small handle at the top for rotation.
Here's the fiddle: http://jsfiddle.net/8PhqX/7/ (give it a minute to load)
The code in the fiddle is very long because I couldn't isolate the specific area relevant to my question. As you play around with it you'll see that the first rotation usually works fine, but then things go crazy.
Here's the code line for the rotation:
//selfRotator.handle.angle is the angle(clockwise) at which the rotation handle was rotated
//selfSelector.rotator.ox/oy is the position of the middle of the selection frame
//selfDefaults.imageArea.y is the y position of the section with the image (because of the red stripe in the top)
//selfImageArea.page.startX/Y is starting position of the image storing its position when the drag begins
//rotating by angle, at center point of selection
selfImageArea.page.transform(
['r', -selfRotator.handle.angle, selfSelector.rotator.ox - selfImageArea.page.startX, selfSelector.rotator.oy - (selfImageArea.page.startY - selfDefaults.imageArea.y)]
)
//tracking the image's start position and compensating
selfImageArea.page.attr({
transform: "...T" + (selfImageArea.page.startX) + "," + (selfImageArea.page.startY - selfDefaults.imageArea.y)
});
It looks like it gets messed up because of the getBBox values, which don't follow the picture's outline. I've added gridlines to illustrate the problem.
Also, I've come across this code (https://groups.google.com/forum/#!topic/raphaeljs/b8YG8DfI__g) for a getBBoxRotated() function that should solve my issue, but I can't seem to implement it.
Thanks for reading.
I'm working on a setup in Cocos2D 1.x where I have a huge CCLayerPanZoom in a scene with free panning and zooming.
Every frame, I have to additionally draw a CCRenderTexture on top to create "darkness" (I'm cutting out the light). That works well.
Now I've added single sprites to the surface, and they are managed by Box2D. That works as well. I can translate to the RenderTexture where the light sources ought to be, and they render fine.
And then I wanted to add a HUD layer on top, by adding a CCLayer to the scene. That layer needs to contain several sprites stacked on top of each other, as user interface elements.
The only problem: all of these elements fail to draw where I need them to be, namely exactly in the center of the screen. The sprites added onto the HUD layer are all off, and I have iterated through pretty much every variation of convertToWorldSpace, convertToNodeSpace, etc.
It is as if the constant scaling by the CCLayerPanZoom in the background throws off the anchor points in the layer above each frame, and resetting them doesn't help. They all seem to default to one of the corners of the node bounding box they are attached to, as if their transform is blocked or set to zero when it comes to drawing.
Has anyone run into this problem? Is this a known issue when using CCLayerPanZoom and drawing a custom CCRenderTexture on top each frame?
Ha! I found the culprit! There's a bug in Cocos2D's way of using Zwoptex data (I'm using Cocos2D v1.0.1).
It seems that when loading Zwoptex v3 data, a sprite frame's trim offset data is ignored when the sprite frame's anchor point is computed. The effect is that no sprite with a trim offset in its definition (e.g. in the plist) has its anchor point correctly set. Really strange... I wonder whether this has happened to anybody else? It's a glaring issue.
Here's how to reproduce:
Create any data for a sprite frame in Zwoptex v3 format (the one that uses the trim data). Make sure you actually have a trimmed sprite, i.e. the offset must be larger than zero, and the image size must be larger than the source size.
Load in the sprite and try to position it at the center of the screen; you'll see it's off. Here's how to compute your anchor point correctly:
CCSprite *floor = [CCSprite spriteWithSpriteFrameName:@"Menu_OmeFloor.png"]; //create a sprite
CCSpriteFrame *frame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"Menu_OmeFloor.png"]; //get its frame to access frame data
[floor setTextureRectInPixels:frame.rect rotated:frame.rotated untrimmedSize:frame.originalSizeInPixels]; //re-set its texture rect

//Ensure that the coordinates are right: the texture frame offset is not counted in when determining the normal anchor point:
float xa = 0.5 + (frame.offsetInPixels.x / frame.originalSizeInPixels.width);
float ya = 0.5 + (frame.offsetInPixels.y / frame.originalSizeInPixels.height);
[floor setAnchorPoint:ccp(xa, ya)];

floor.position = (where you need it);
Replace the 0.5 in the xa/ya formula with your required anchor point values.