I'm creating a particle system using Qt and C++. I want to blend the colours of particles that overlap each other, adding the RGB values on top of each other so the colours get brighter, something like this:
My code looks like this:
In my custom QGraphicsScene class:
QPixmap* pixmap2 = new QPixmap("E:\\Qt_workspace\\1\\smoke5.png");
pixmap2->setDevicePixelRatio(0.5);
QPointF origin2 = {250, 100};
QPainter pix2(pixmap2);
pix2.setCompositionMode(QPainter::CompositionMode_Plus);
pix2.drawPixmap(origin2, *pixmap2);
pix2.setRenderHints(QPainter::Antialiasing | QPainter::SmoothPixmapTransform, true);
particleSystem2 = new ParticleSystem(this, pixmap2, origin2);
v_particleSystem.push_back(particleSystem2);
And I create particles in a loop:
void Level1::advance()
{
for (int i = 0; i < v_particleSystem.size(); ++i) {
v_particleSystem[i]->applyForce(gravity);
QVector2D v = { (float)player->x(), (float)player->y() };
repeller->update(v);
v_particleSystem[i]->applyReppeler(repeller);
v_particleSystem[i]->addParticle();
v_particleSystem[i]->run();
}
update(sceneRect); // so items don't leave any artifacts, though it works without this when using m_view->viewport()->repaint()
m_view->viewport()->repaint();
}
And the Particle class derives from QGraphicsPixmapItem.
But I'm getting this result:
Any idea how to achieve additive blending?
As far as I've found out, cocos2d doesn't offer simple filter handling the way, for example, AS3 does.
My situation:
I want to add a realtime shadow to a cocos2d::Sprite.
For example I would like to do something like this (similar to AS3):
auto mySprite = Sprite::createWithSpriteFrameName("myCharacter.png");
DropShadowFilter* dropShadow = new DropShadowFilter();
dropShadow->distance = 0;
dropShadow->angle = 45;
dropShadow->color = 0x333333;
dropShadow->alpha = 1;
dropShadow->blurX = 10;
dropShadow->blurY = 10;
dropShadow->strength = 1;
dropShadow->quality = 15;
mySprite->addFilter(dropShadow);
This should add a shadow to my Sprite to achieve a result like this:
Adobe Drop Shadow Example
Could you help me please?
There isn't any built in support for shadows on Sprites in Cocos2D-X.
The best option, performance-wise, would be to place your shadows in your sprite images already, instead of calculating and drawing them in the code.
Another option is to subclass Sprite and override the draw method so that you duplicate the sprite, apply your effects, and draw the copy below the original.
One possible way to achieve that is with this snippet from this thread on the Cocos forum. I can't say that I completely follow what this code does with the GL blending state, but you can use it as a starting point to experiment.
void CMySprite::draw()
{
// is_shadow is true if this sprite is to be considered like a shadow sprite, false otherwise.
if (is_shadow)
{
ccBlendFunc blend;
// Change the default blending factors to this one.
blend.src = GL_SRC_ALPHA;
blend.dst = GL_ONE;
setBlendFunc( blend );
// Change the blending equation to this one in order to subtract the sprite's values
// from those already written in the frame buffer.
glBlendEquationOES(GL_FUNC_REVERSE_SUBTRACT_OES);
}
CCSprite::draw();
if (is_shadow)
{
// The default blending function of cocos2d-x is GL_FUNC_ADD.
glBlendEquationOES(GL_FUNC_ADD_OES);
}
}
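For the "duplicate the sprite and draw it below" option mentioned above, here is a rough sketch against the newer (3.x) Sprite API, matching the style of the question. parentNode, the tint, the opacity and the offset are placeholders, and this gives a hard offset shadow rather than a blurred one:
auto character = Sprite::createWithSpriteFrameName("myCharacter.png");
auto shadow    = Sprite::createWithSpriteFrameName("myCharacter.png"); // second copy of the same frame
character->setPosition(Vec2(200.0f, 200.0f));
shadow->setColor(Color3B::BLACK);                                   // tint the copy dark
shadow->setOpacity(110);                                            // make it semi-transparent
shadow->setPosition(character->getPosition() + Vec2(6.0f, -6.0f));  // fake the filter's distance/angle
parentNode->addChild(shadow, 0);    // lower z-order, so the shadow renders underneath
parentNode->addChild(character, 1);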
I am trying to implement a program that draws basic shapes. All is going well except that new shapes are being drawn below the current shape. My Paint function in the frame's source file looks like this:
void ShapeFrame::OnPaint(wxPaintEvent& event)
{
wxPaintDC dc(this);
wxGraphicsContext *gc = wxGraphicsContext::Create(dc);
wxGraphicsMatrix trans = gc->CreateMatrix();
setTransform(&trans);
ShapeData *data = wxGetApp().getData();
for(int i=0; i<data->getNumShapes(); i++)
{
data->getShape(i)->draw(dc, &trans);
}
delete gc;
}
A snapshot of the app screen is also given.
I am a beginner in wxWidgets. The circle was drawn first and then the rectangle.
I found out that the setOpacity function does not work in one of our cocos2d games, which is using cocos2d 1.0.1. No matter what value I set, the opacity of all CCNodes is always 255, and the fade in/fade out actions are not working either. We have another game which is using the same version of cocos2d, but that one works perfectly. Does anyone have any clue about how to solve this problem?
CCNodes don't actually have a texture (image), so there is no opacity property for them. I am assuming you think that setting the opacity of a CCNode will affect its children, which it does not. Opacity only affects the texture of the object you are setting it on. You can set the opacity of a CCSprite, because it has a texture, but doing so does not affect that CCSprite's children. You would have to loop through all of the children and set the opacity on each one if you want to affect more than one CCSprite, as in the sketch below.
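For completeness, a rough sketch of that loop in cocos2d-x 2.x style (the container macro and some class names differ slightly in 1.0.1, so treat this as illustrative only):
// Recursively set the opacity of every child that actually has a texture,
// i.e. anything implementing CCRGBAProtocol (CCSprite, CCLabelTTF, ...).
void setOpacityRecursively(cocos2d::CCNode* node, GLubyte opacity)
{
    cocos2d::CCObject* obj = NULL;
    CCARRAY_FOREACH(node->getChildren(), obj)
    {
        cocos2d::CCNode* child = (cocos2d::CCNode*)obj;
        cocos2d::CCRGBAProtocol* rgba = dynamic_cast<cocos2d::CCRGBAProtocol*>(child);
        if (rgba)
            rgba->setOpacity(opacity);          // only affects this node's own texture
        setOpacityRecursively(child, opacity);  // descend into grandchildren
    }
}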
Basic DrawNode can't handle opacity by itself either (this feature is in the plan for cocos2d-4.*).
You can inherit your class from Node or DrawNode and implement setOpacity like this:
void AlphaNode::setOpacity(GLubyte opac) {
mOpacity = opac;
if (_bufferCount) {
for (int i = 0; i < _bufferCount; i++) {
_buffer[i].colors.a = mOpacity;
}
}
if (_bufferCountGLPoint) {
for (int i = 0; i < _bufferCountGLPoint; i++) {
_bufferGLPoint[i].colors.a = mOpacity;
}
}
if (_bufferCountGLLine) {
for (int i = 0; i < _bufferCountGLLine; i++) {
_bufferGLLine[i].colors.a = mOpacity;
}
_dirtyGLLine = true;
}
_dirty = true;
}
I think you can do something like this for Node.
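Assuming AlphaNode derives from DrawNode and adds the usual CREATE_FUNC(AlphaNode) macro, usage could look roughly like this (an untested sketch):
auto shape = AlphaNode::create();
shape->drawSolidCircle(Vec2(100, 100), 40, 0, 32, Color4F::RED); // fill the vertex buffers first
this->addChild(shape);
shape->setOpacity(128); // the override above now fades the geometry that was just drawn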
So I am trying to create a mini-map/PIP. I have an existing program with a scene that runs inside a Qt widget. I have a class, NetworkViewer, which extends CompositeViewer. In NetworkViewer's constructor I call the following function. Note that root is the scene, which is populated elsewhere.
void NetworkViewer::init() {
root = new osg::Group() ;
viewer = new osgViewer::View( );
viewer->setSceneData( root ) ;
osg::Camera* camera ;
camera = createCamera(0,0,100,100) ;
viewer->setCamera( camera );
viewer->addEventHandler( new NetworkGUIHandler( (GUI*)view ) ) ;
viewer->setCameraManipulator(new osgGA::TrackballManipulator) ;
viewer->getCamera()->setClearColor(
osg::Vec4( LIGHT_CLOUD_BLUE_F,0.0f));
addView( viewer );
osgQt::GraphicsWindowQt* gw =
dynamic_cast<osgQt::GraphicsWindowQt*>( camera->getGraphicsContext() );
QWidget* widget = gw ? gw->getGLWidget() : NULL;
QGridLayout* grid = new QGridLayout( ) ;
grid->addWidget( widget );
grid->setSpacing(1);
grid->setMargin(1);
setLayout( grid );
initHUD( ) ;
}
The create camera function is as follows:
osg::Camera* createCamera( int x, int y, int w, int h ) {
osg::DisplaySettings* ds = osg::DisplaySettings::instance().get();
osg::ref_ptr<osg::GraphicsContext::Traits> traits
= new osg::GraphicsContext::Traits;
traits->windowName = "" ;
traits->windowDecoration = false ;
traits->x = x;
traits->y = y;
traits->width = w;
traits->height = h;
traits->doubleBuffer = true;
traits->alpha = ds->getMinimumNumAlphaBits();
traits->stencil = ds->getMinimumNumStencilBits();
traits->sampleBuffers = ds->getMultiSamples();
traits->samples = ds->getNumMultiSamples();
osg::ref_ptr<osg::Camera> camera = new osg::Camera;
camera->setGraphicsContext( new osgQt::GraphicsWindowQt(traits.get()) );
camera->setViewport( new osg::Viewport(0, 0, traits->width, traits->height) );
camera->setViewMatrix(osg::Matrix::translate(-10.0f,-10.0f,-30.0f));
camera->setProjectionMatrixAsPerspective(
20.0f,
static_cast<double>(traits->width)/static_cast<double>(traits->height),
1.0f, 10000.0f );
return camera.release();
}
I have been looking at several camera examples and searching for a solution for a while, to no avail. What I am really looking for is for my main camera to be the background, taking up most of the screen and displaying the scene graph, while my mini-map appears in the bottom right. It has the same scene as the main camera, but it is overlaid on top of it and has its own set of controls for selection, etc., since it will have different functionality.
I was thinking that perhaps by adding another camera as a slave I would be able to do this:
camera = createCamera(40,40,50,50) ;
viewer->addSlave(camera) ;
But this doesn't seem to work. If I disable the other camera I do see a clear area that this camera was apparently supposed to be rendering in (its viewport), but that doesn't help. I've played around with the rendering order, thinking it could be that, to no avail.
Any ideas? What is the best way to do such a minimap? What am I doing wrong? Also, is there any way to make the rendering of the minimap circular instead of rectangular?
I am not personally using OpenSceneGraph, so I can't advise you on your code. I think the best is to ask in the official forums. But I have some ideas on the minimap:
First, you have to specify the basic features of your minimap. Do you want it to emphasize some elements of the scene, or just show the scene (i.e., an RTS-like minimap vs. a simple top-down view of the scene)?
I assume you do not want to emphasize some elements of the scene. I also assume your main camera is not really 'minimap-friendly'. So, the simplest is to create a camera with the following properties:
direction = (0,-1,0) (Y is the vertical axis)
mode = orthographic
position = controlled by what you want, for example your main camera
Now, for the integration of the image, you can:
set the viewport of the minimap camera to what you want, if your minimap is rectangular (see the sketch after this list).
render the minimap to a texture (RTT) and then blend it through an extra rendering pass.
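For the viewport option (the first bullet above), one way that fits the asker's CompositeViewer setup is a second View that shares the scene and the graphics context. This is only a sketch: root and gw are the names from the question's code, the viewport numbers and ortho extents are placeholders, and it is untested:
// A second View drawn into a corner of the same GraphicsWindowQt
osgViewer::View* mapView = new osgViewer::View;
mapView->setSceneData( root );                            // same scene as the main view
osg::Camera* mapCam = mapView->getCamera();
mapCam->setGraphicsContext( gw );                         // reuse the existing Qt GL window
mapCam->setViewport( new osg::Viewport(0, 0, 200, 200) ); // e.g. a 200x200 px corner
mapCam->setRenderOrder( osg::Camera::POST_RENDER );       // draw after the main camera
mapCam->setClearMask( GL_DEPTH_BUFFER_BIT );              // keep the main view's colour buffer
// Fixed top-down orthographic projection (Y is the vertical axis, as above)
mapCam->setProjectionMatrixAsOrtho( -100, 100, -100, 100, 1.0, 10000.0 );
mapCam->setViewMatrixAsLookAt( osg::Vec3(0.0, 500.0, 0.0),  // eye high above the scene
                               osg::Vec3(0.0, 0.0, 0.0),    // looking straight down
                               osg::Vec3(0.0, 0.0, -1.0) ); // "up" direction on the minimap
mapCam->setAllowEventFocus( false );                        // don't let it steal input
addView( mapView );                                         // inside NetworkViewer (a CompositeViewer)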
You can also search other engines forums (like Irrlicht and Ogre) to see how they're doing minimaps.
I'm an openFrameworks newbie. I am learning basic 2D drawing, which is all great so far. I have drawn a circle using:
ofSetColor(0x333333);
ofFill();
ofCircle(100,650,50);
My question is: how do I give the circle a variable name so that I can manipulate it in the mousePressed method? I tried adding a name before the ofCircle
theball.ofSetColor(0x333333);
theball.ofFill;
theball.ofCircle(100,650,50);
but I get a "'theball' was not declared in this scope" error.
As razong pointed out, that's not how OF works. OF (to the best of my knowledge) provides a handy wrapper around a lot of OpenGL stuff. So you should use OF calls to affect the current drawing context (as opposed to thinking of a canvas with sprite objects or whatever). I usually integrate that kind of thing into my objects. So let's say you have a class like this...
class TheBall {
protected:
ofColor col;
ofPoint pos;
public:
// Pass a color and position when we create ball
TheBall(ofColor ballColor, ofPoint ballPosition) {
col = ballColor;
pos = ballPosition;
}
// Destructor
~TheBall() {}
// Make our ball move across the screen a little when we call update
void update() {
pos.x++;
pos.y++;
}
// Draw stuff
void draw(float alpha) {
ofEnableAlphaBlending(); // We activate the OpenGL blending with the OF call
ofFill(); // Fill the circle rather than just outlining it
ofSetColor(col, alpha); // Set color to the balls color field
ofCircle(pos.x, pos.y, 5); // Draw command
ofDisableAlphaBlending(); // Disable the blending again
}
};
Ok cool, I hope that makes sense. Now with this structure you can do something like the following
void testApp::setup() {
ofColor color;
ofPoint pos;
color.set(255, 0, 255); // A bright gross purple
pos.x = pos.y = 50; // start the ball at (50, 50)
aBall = new TheBall(color, pos); // aBall would be a TheBall* member declared in testApp.h
}
void testApp::update() {
aBall->update();
}
void testApp::draw() {
float alpha = fabs(sin(ofGetElapsedTimef())) * 255; // This will be a fun flashing effect
aBall->draw(alpha);
}
Happy programming.
Happy designing.
You can't do it that way. ofCircle is a global drawing method and draws just a circle.
You can declare a variable (or better, three ints for RGB, since you can't use ofColor as an argument for ofSetColor) that stores the color for the circle, and modify it in the mousePressed method.
Inside the draw method use your variables for ofSetColor before rendering the circle.
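A minimal sketch of that approach (the member names and the colour picked in mousePressed are just examples):
// testApp.h: variables that store the circle's colour
int circleR, circleG, circleB;

// testApp.cpp
void testApp::setup() {
    circleR = circleG = circleB = 0x33;    // start with the original dark grey
}

void testApp::draw() {
    ofFill();
    ofSetColor(circleR, circleG, circleB); // use the variables instead of a literal
    ofCircle(100, 650, 50);
}

void testApp::mousePressed(int x, int y, int button) {
    // change the stored colour; the next draw() call picks it up
    circleR = 255;
    circleG = 0;
    circleB = 0;
}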