How are textures referenced in shaders? - opengl

Quick OpenGL newbie question. I have this line in my fragment shader:
uniform sampler2D mytexture;
I grepped through the sample code I'm using and I couldn't find any reference to mytexture; I only found calls which activate a texture unit and bind to it a texture that was previously copied to the GPU.
My question is: how does the fragment shader know that the only texture I'm using has to be referred to via mytexture?
I would have thought a
glBindTextureToUniform(texture_id, "mytexture");
or similar had to be called.

The mapping is between the texture unit and the uniform, not the texture itself, as concept3d mentions. The reason for this indirection is multi-texturing.
Texture units are the machinery OpenGL provides to facilitate multi-texturing. When you want multiple textures to be looked up in your shaders, you bind each one to a different texture unit (strictly speaking, to a target within that unit) and use a different uniform for each to refer to it in the shader. Each texture unit has multiple targets such as 1D, 2D, 3D, etc.
The texture looked up in the shader, through the uniform, refers to the unit; the uniform's type (e.g. sampler2D) decides which target within that unit is used. Here is how the mapping works:
uniform sampler2D mytexture;
            |         |
            |         |
            |         +--->  GL_TEXTURE0 / UNIT 0     GL_TEXTURE1 / UNIT 1     GL_TEXTURE2 / UNIT 2
            |               +--------------------+   +--------------------+   +--------------------+
            |               |  +------------+    |   |  +------------+    |   |  +------------+    |
            |               |  | TEXTURE_1D |    |   |  | TEXTURE_1D |    |   |  | TEXTURE_1D |    |
            |               |  +------------+    |   |  +------------+    |   |  +------------+    |
            |               |  +------------+    |   |  +------------+    |   |  +------------+    |
            +----------------->| TEXTURE_2D |    |   |  | TEXTURE_2D |    |   |  | TEXTURE_2D |    |
                            |  +------------+    |   |  +------------+    |   |  +------------+    |
                            |  +------------+    |   |  +------------+    |   |  +------------+    |  …
                            |  | TEXTURE_3D |    |   |  | TEXTURE_3D |    |   |  | TEXTURE_3D |    |
                            |  +------------+    |   |  +------------+    |   |  +------------+    |
                            |  +------------+    |   |  +------------+    |   |  +------------+    |
                            |  |TEX_1D_ARRAY|    |   |  |TEX_1D_ARRAY|    |   |  |TEX_1D_ARRAY|    |
                            |  +------------+    |   |  +------------+    |   |  +------------+    |
                            |         .          |   |         .          |   |         .          |
                            |         .          |   |         .          |   |         .          |
                            |         .          |   |         .          |   |         .          |
                            +--------------------+   +--------------------+   +--------------------+
It is to these targets (the inner boxes) that you upload the texture data from an image, a table, etc. The active unit, if not set explicitly, is GL_TEXTURE0 by default.
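For example, uploading image data to the 2D target of the active unit looks something like this (a minimal sketch, not from the question's sample code; texId, w, h and pixels are placeholder names, and error checking is omitted):
GLuint texId;
glGenTextures(1, &texId);
glActiveTexture(GL_TEXTURE0);                 // select unit 0 (also the default)
glBindTexture(GL_TEXTURE_2D, texId);          // bind to that unit's 2D target
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);   // upload goes to the bound target
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);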
how does the fragment shader know that the only texture I'm using has to be referred via mytexture in the fragment shader?
It's better to set the sampler uniform explicitly to the unit index. If you did not set it and it still works, that is because uniforms without an initializer default to 0, so a lone sampler ends up pointing at unit 0; relying on that (or on more forgiving driver behaviour) may not hold on every device or on a future driver version, so it is safer to set it explicitly.
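Concretely, the explicit mapping for two textures might look like this (a sketch with made-up names; program, baseTexId and detailTexId are assumed to exist, and the shader is assumed to declare sampler2D uniforms baseTex and detailTex):
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "baseTex"),   0);   // baseTex samples unit 0
glUniform1i(glGetUniformLocation(program, "detailTex"), 1);   // detailTex samples unit 1

glActiveTexture(GL_TEXTURE0);                 // unit 0 ...
glBindTexture(GL_TEXTURE_2D, baseTexId);      // ... gets this texture on its 2D target
glActiveTexture(GL_TEXTURE1);                 // unit 1 ...
glBindTexture(GL_TEXTURE_2D, detailTexId);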

OpenGL has the concept of texture units; these are fixed, numbered from 0 up to an implementation-defined maximum (GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS).
First you select the active texture unit, using glActiveTexture(GL_TEXTURE0).
Then you bind the texture to the active texture unit, using glBindTexture(GL_TEXTURE_2D, tex);
And finally you set the shader uniform's value to reference the texture unit, not the texture id; the sampler uses whatever texture is bound to that unit.
Think of the texture unit as a container: the uniform variable uses whatever is in the texture unit.
In other words:
You bind a texture id to a texture unit, not to a uniform directly (there is no glBindTextureToUniform).
You set the uniform variable to the index of the texture unit, and it will reference whatever texture is bound to that unit, as the sketch below shows.
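A short sketch of that container idea (again with made-up names; texA and texB are texture ids that have already been uploaded): the uniform keeps pointing at unit 0, and only the binding changes between draws.
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "mytexture"), 0);   // sampler reads unit 0

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texA);   // the next draw call samples texA
// ... draw ...
glBindTexture(GL_TEXTURE_2D, texB);   // this draw call samples texB; the uniform is untouched
// ... draw ...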


Is it possible to use AWS KPL on Lambda connected to API Gateway

I am trying to build a data collection pipeline on top of AWS services. The overall architecture is given below.
In summary, the system should receive events from API Gateway (1) (one request per event), and the data should be written to Kinesis (2).
I am expecting ~100k events per second. My question is related to KPL usage in Lambda functions. In step 2 I am planning to write a Lambda function that uses the KPL to write events to Kinesis with high throughput. But I am not sure this is possible, as API Gateway invokes the Lambda function separately for each event.
Is it possible/reasonable to use the KPL in such an architecture, or should I use the Kinesis Put API instead?
1 2 3 4
+----------------+ +----------------+ +----------------+ +----------------+
| | | | | | | |
| | | | | | | |
| AWS API GW +-----------> | AWS Lambda +-----------> | AWS Kinesis +----------> | AWS Lambda |
| | | Function with | | Streams | | |
| | | KPL | | | | |
| | | | | | | |
+----------------+ +----------------+ +----------------+ +-----+-----+----+
| |
| |
| |
| |
| |
5 | | 6
+----------------+ | | +----------------+
| | | | | |
| | | | | |
| AWS S3 <-------+ +----> | AWS Redshift |
| | | |
| | | |
| | | |
+----------------+ +----------------+
I am also thinking about writing directly to S3 instead of calling a Lambda function from API Gateway. If the first architecture is not reasonable this may be a solution, but in that case there will be a delay before the data is written to Kinesis.
1 2 3 4 5
+----------------+ +----------------+ +----------------+ +----------------+ +----------------+
| | | | | | | | | |
| | | | | | | | | |
| AWS API GW +-----------> | AWS Lambda +------> | AWS Lambda +-----------> | AWS Kinesis +----------> | AWS Lambda |
| | | to write data | | Function with | | Streams | | |
| | | to S3 | | KPL | | | | |
| | | | | | | | | |
+----------------+ +----------------+ +----------------+ +----------------+ +-----+-----+----+
| |
| |
| |
| |
| |
6 | | 7
+----------------+ | | +----------------+
| | | | | |
| | | | | |
I do not think using the KPL is the right choice here. The key concept of the KPL is that records get collected at the client and are then sent to Kinesis as a batch operation. Since Lambdas are stateless per invocation, it would be rather difficult to store the records for aggregation (before sending them to Kinesis).
I think you should have a look at the following AWS article, which explains how you can connect API Gateway directly to Kinesis. This way, you can avoid the extra Lambda which just forwards your request.
Create an API Gateway API as a Kinesis Proxy
Obviously, if the data coming through AWS API Gateway corresponds to one Kinesis Data Streams record, it makes no sense to use the KPL, as pointed out by Jens. In that case you can call the Kinesis API directly without using Lambda. Alternatively, you may do some additional processing in Lambda and send the data through PutRecord (not PutRecords, which the KPL uses). Your Java code would look like this:
AmazonKinesisClientBuilder clientBuilder = AmazonKinesisClientBuilder.standard();
clientBuilder.setRegion(REGION);
clientBuilder.setCredentials(new DefaultAWSCredentialsProviderChain());
clientBuilder.setClientConfiguration(new ClientConfiguration());
AmazonKinesis kinesisClient = clientBuilder.build();
...
//then later on each record
PutRecordRequest putRecordRequest = new PutRecordRequest();
putRecordRequest.setStreamName(STREAM_NAME);
putRecordRequest.setData(data);
putRecordRequest.setPartitionKey(daasEvent.getAnonymizedId());
putRecordRequest.setExplicitHashKey(Utils.randomExplicitHashKey());
putRecordRequest.setSequenceNumberForOrdering(sequenceNumberOfPreviousRecord);
PutRecordResult putRecordResult = kinesisClient.putRecord(putRecordRequest);
sequenceNumberOfPreviousRecord = putRecordResult.getSequenceNumber();
However, there may be cases when using the KPL from Lambda makes sense, for example when the data sent to AWS API Gateway contains multiple individual records which will be sent to one or more streams. In those cases the benefits of the KPL (see https://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-concepts.html) still apply, but you have to be aware of the specifics of running it inside Lambda, notably the "issue" pointed out at https://github.com/awslabs/amazon-kinesis-producer/issues/143, and call
kinesisProducer.flushSync()
at the end of the insertions, which also worked for me.

Gnome Shell causes QMenu to display in incorrect position when using multiple screens

My Qt app's context menu is displayed in the incorrect position when using multiple monitors on Gnome 3.
It would seem that the culprit here is perhaps Gnome Shell rather than Qt itself, as I can't replicate the issue described below with Ubuntu Unity; it only happens when running Ubuntu GNOME 14.04.
Symptoms:
I am using QWidget::mapToGlobal(QPoint) from a QWidget::customContextMenuRequested signal in order to find the correct position to display a context menu.
I am then using QMenu::exec(QPoint) to display the menu in that position
void Window::slotOnShowContextMenu(const QPoint& p)
{
    _menu->exec(_tree->viewport()->mapToGlobal(p));
}
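For reference, a minimal sketch of how such a slot is typically wired up (this part is not shown in the question; it assumes _tree is a QTreeView or another item view and _menu an existing QMenu):
// Hypothetical setup, not from the question's code.
_tree->setContextMenuPolicy(Qt::CustomContextMenu);
connect(_tree, &QWidget::customContextMenuRequested,
        this, &Window::slotOnShowContextMenu);
// The QPoint delivered to the slot is in viewport coordinates,
// hence the _tree->viewport()->mapToGlobal(p) call above.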
My problem is that I have the following screen layout:
When my window is on the right hand screen, or on the left hand screen but at a position below the top of the right hand screen, the context menu is shown correctly:
When my window is on the left hand screen, at a level above the top of the right hand screen, even though the Y value of the QPoint returned from mapToGlobal is correct, the context menu is not displayed at that point, but is rather constrained to be at the same level as the right hand screen.
I have confirmed that _tree->viewport()->mapToGlobal(p) returns the correct results (just by logging the resulting QPoint)
QPoint point = _tree->viewport()->mapToGlobal(p);
std::cout << point.x() << ":" << point.y() << '\n';
It would therefore seem that QMenu::exec(QPoint) is the culprit?
How can I correctly display my context menu?
Edit:
I tried running the same app on standard Ubuntu 14.04 (i.e. using Unity instead of Gnome), and the incorrect behaviour doesn't present itself, so this would seem to be a Gnome 3 issue?
I have tried changing my primary monitor so that the portrait monitor on the left is primary, and the context menu displays correctly.
Note black launch bar is on the left screen
When using the following layout (primary screen is landscape on the right) the context menu position is constrained to be the top of the primary monitor.
Note black launch bar is on the right screen
So it would seem that the primary monitor's top position is the highest point at which Qt will display its context menu?
While the technical details may be slightly different, I found with elementary os and two monitors (a laptop and a smart tv), with my built-in display positioned on the right of the external display, the menu is invisible if I have the Qt app (KeePassXC in my case) on my built-in display. If I move it to my external display, the menus are at the top of the display, instead of in the window. There is clearly a bug somewhere, whether in Qt or ubuntu or gnome shell, I can't say.
I can say that switching my displays back to the default positioning, with the built-in on the left of external, the issue is resolved: I can finally access the menus where they should be.
I have tried moving the window location around, and as long as I have my built-in display situated to the left of my external display, it doesn't matter where the window is, it just works like it should.
This solution may not be ideal if your monitors are not physically positioned that way, but in my case, I'm already having to lie about where my monitors are positioned. My built-in is below my external, but when I try to tell elementary that, it stops functioning correctly (all windows are moved to outside the bounds of the screen, all windows launch off screen; deleting .config/monitors.xml fixes it, but I have to be able to get to a command prompt on the screen to do it.)
I ran into this same issue. As I dug into it I realized that it's a conflict between the perceived/"logical" monitor location and the rendering of the screen. In my case, I can see through the output of xrandr and the configuration of ~/.config/monitors.xml that my right-most monitor (the equivalent of your Hewlett Packard 27" monitor) has a position offset of +360:
$ xrandr
Screen 0: minimum 320 x 200, current 7040 x 1440, maximum 16384 x 16384
eDP-1 connected primary 1920x1080+5120+360 (normal left inverted right x axis y axis) 309mm x 173mm
...
These 360 pixels correspond to the top of the window's "location" as understood by Qt.
In my case the menu bar itself is 25 pixels high. Keeping all this in mind, the offset of where the menu actually gets drawn works out to 360 + the height of the title bar + the height of the menu.
+---------+---------------------------------------------+ ^
| | | |
| +-------------------------------------^-------------+ | |
| | | | 25 pixels | | | 360 pixels
| +-------------------------------------v-------------+ | |
| | | | | v
| | | 385 pixels | +---+---------------------------+
| | | | | |
| | +----v+ | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | +-----+ | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| +---------------------------------------------------+ | |
| | |
| | |
| | |
| | |
+-------------------------------------------------------+-------------------------------+
When I re-aligned my screens so that the position offset was zero, as in the following setup, the problem went away:
$ xrandr
Screen 0: minimum 320 x 200, current 7040 x 1440, maximum 16384 x 16384
eDP-1 connected primary 1920x1080+5120+0 (normal left inverted right x axis y axis) 309mm x 173mm
...
In this case the 360 pixel position offset is now zero and QT renders the menu drop down at the correct location:
+-------------------------------------------------------+-------------------------------+
| | |
| +-------------------------------------^-------------+ | |
| | | 25 pixels | | |
| +--+-----+----------------------------v-------------+ | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | +-----+ | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | +-------------------------------+
| +---------------------------------------------------+ |
| |
| |
| |
| |
+--------------------------------------------------------
I was in the process of filing a bug with Qt about this (as I have multiple applications which are affected by it), but while collecting the relevant information for the report I discovered that it's not just Qt/Qt5 that is affected, but also Blender. As Blender does not use a GUI framework (e.g. Qt/Qt5, GTK+, etc.), this is almost certainly a bug in one of the components of GNOME 3.
I'm running into this with qt applications in fvwm if:
I have a monitor configuration with deadspace
and if qt menu animations are enabled
The menus end up off-screen and not visible at all, or in completely the wrong place (depending on the monitor offsets).
Thus I'm not so sure this is just a Gnome Shell bug.

how to classify the whole data set in weka

I've got a supervised data set with 6836 instances, and I need to know the predictions of my model for all the instances, not only for a test set.
I followed a train-test approach (2/3 - 1/3) to measure my TPR and FPR rates, and I've got the predictions for my test set (1/3), but I need to know the predictions for all 6836 instances.
How can I do it?
Thanks!
In the Classify tab in Weka Explorer there should be a button that says 'More options...'. If you go in there you should be able to output the predictions as plain text. If you use cross-validation rather than a percentage split you will get predictions for all instances, in a table like this:
+-------+--------+-----------+-------+------------+
| inst# | actual | predicted | error | prediction |
+-------+--------+-----------+-------+------------+
|     1 |   2:no |     1:yes |     + |      0.926 |
|     2 |  1:yes |     1:yes |       |      0.825 |
|     1 |   2:no |     1:yes |     + |      0.636 |
|     2 |  1:yes |     1:yes |       |      0.808 |
|   ... |    ... |       ... |   ... |        ... |
+-------+--------+-----------+-------+------------+
If you don't want to do cross-validation, you can also create a data set containing all your data (training + test) and supply it as the test set. Then you can go to 'More options...' and show the predictions as Campino already answered.

Omnigraffle templates: Avoid scaling of some parts of grouped shapes

I would like to create my own templates to mock up my web applications. Now imagine I had a Panel shape. It consists of a title and a body divided by a line. When I later scale the template vertically, I would not like to see the title bar scaled.
First of all, is that possible? And if so, how?
My panel should look something like that:
-------------------
|      Title      |   -> should not change its height on scaling
-------------------
| Some Text in    |
| here..          |
|                 |
|                 |
|        |        |
|        ˇ        |
| Only the body   |
| should scale... |
|                 |
-------------------
I am using Omnigraffle 5.4.

WTL CSplitterWindow cannot create more than 3 instances?

I'm using WTL to create a window containing several split panes. The following is the intended result.
---------------------------
|     |         |         |
|     |         |         |
|     |         |         |
|     |-------------------|
|     |         |         |
|     |         |         |
---------------------------
There will be 4 splitters, three vertical ones and a horizontal one.
I followed the great article: http://www.codeproject.com/KB/wtl/wtl4mfc7.aspx.
But I can only add 3 splitters, as below.
---------------------------
|     |         |         |
|     |         |         |
|     |         |         |
|     |-------------------|
|     |                   |
|     |                   |
---------------------------
I tried a lot of ways but still cannot add the last one.
Is it a bug in WTL? Can anybody help me?
Best regards,
Zach#Shine
What is your problem? Is it a compile error, a runtime ASSERT, something else?
I strongly suggest that you derive your CMainFrame from CSplitFrameWindowImpl<>.
---------------------------
|     |         |         |
|     |   2TL   |   2TR   |
| 1L  |         |         |
|     |-------------------|
|     |   2BL   |   2BR   |
|     |         |         |
---------------------------
The right pane (containing all the '2' panes) should derive from CSplitterWindowImpl<CPane2, false>; the right top pane (containing the '2T' panes) should derive from CSplitterWindowImpl<CPane2T, true>, and likewise the right bottom one.
Each split pane should be created in the OnCreate() handler of its parent, and it should create its own children in its own OnCreate(). A rough sketch follows.
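For illustration, here is a sketch of that hierarchy under my own naming; it assumes the two-parameter CSplitterWindowImpl used above, and the usual WTL module/frame boilerplate, pane contents and error handling are omitted:
#include <atlbase.h>
#include <atlapp.h>
#include <atlwin.h>
#include <atlsplit.h>

// Right-top pane: splits vertically into 2TL / 2TR.
class CPane2T : public CSplitterWindowImpl<CPane2T, true>
{
public:
    DECLARE_WND_CLASS_EX(_T("Pane2T"), CS_DBLCLKS, COLOR_WINDOW)
    // Create the 2TL / 2TR children in OnCreate() and call SetSplitterPanes().
};

// The right-bottom pane (CPane2B) would look the same.

// Right pane: splits horizontally into 2T (top) / 2B (bottom).
class CPane2 : public CSplitterWindowImpl<CPane2, false>
{
public:
    DECLARE_WND_CLASS_EX(_T("Pane2"), CS_DBLCLKS, COLOR_WINDOW)

    CPane2T m_top;
    CPane2T m_bottom;   // reusing CPane2T here just to keep the sketch short

    BEGIN_MSG_MAP(CPane2)
        MESSAGE_HANDLER(WM_CREATE, OnCreate)
        CHAIN_MSG_MAP(CSplitterWindowImpl<CPane2, false>)
    END_MSG_MAP()

    LRESULT OnCreate(UINT, WPARAM, LPARAM, BOOL& bHandled)
    {
        // Each pane creates its children in its own OnCreate().
        m_top.Create(m_hWnd, rcDefault, NULL,
                     WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS | WS_CLIPCHILDREN);
        m_bottom.Create(m_hWnd, rcDefault, NULL,
                        WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS | WS_CLIPCHILDREN);
        SetSplitterPanes(m_top, m_bottom);
        bHandled = FALSE;   // let the splitter base class see WM_CREATE too
        return 0;
    }
};

// The main frame then creates a CPane2 instance (plus the left '1L' pane) in
// its own OnCreate() and sets the two as its splitter panes.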