cocos2d-x Run parent action from child - c++

I'm trying to write a simple game using the cocos2d-x library.
I created a class (named Letter) that spawns a sprite with a random letter as a label, and I added a listener because I want to catch touch events.
I have this handler:
listener->onTouchEnded = [=](cocos2d::Touch* touch, cocos2d::Event* event)
{
    CCLOG("press");
    Letter::touchEvent(touch, event);
};
and action:
void Letter::touchEvent(cocos2d::Touch* touch, cocos2d::Event* event)
{
    this->removeFromParentAndCleanup(true);
    CCLOG("touched MySprite");
}
In my Layer I have a function, createLetter(), that spawns an instance of the Letter class:
{
    CCLOG("new letter");
    Letter* _letter = Letter::create();
    addChild(_letter, 1);
}
And of course in init() I create one letter:
this->createLetter();
Now I want to create an action which runs after a touch to send some information (an int) to my Layer, destroy the sprite, and run createLetter() again.
How can I do this? I tried creating a CC_CALLBACK_1 and a few other things, but I have no idea what I need to do. :(
I'm not a C++ master, but I think I have basic knowledge of C++. I'm a system administrator, but I would like to try something new.
Thank you for your help.

Use this->getParent() in the Letter class to access the Layer class from the child, and then call any method written there, including createLetter() or a new method that takes the integer:
YourLayerClass* layerObject = static_cast<YourLayerClass*>(this->getParent());
layerObject->sendData(3);
layerObject->createLetter();
this->removeFromParentAndCleanup(true);
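Putting it together, a minimal sketch of how Letter::touchEvent could look; YourLayerClass and sendData() stand in for your own layer class and whatever method you add to receive the int:

void Letter::touchEvent(cocos2d::Touch* touch, cocos2d::Event* event)
{
    CCLOG("touched MySprite");
    // Grab the parent layer before removing this node from it;
    // after removal, getParent() would return nullptr.
    auto* layer = dynamic_cast<YourLayerClass*>(this->getParent());
    if (layer)
    {
        layer->sendData(3);     // hypothetical method: pass your int to the layer
        layer->createLetter();  // spawn the next letter
    }
    this->removeFromParentAndCleanup(true);  // destroy this sprite last
}

Note the ordering: getParent() must be called before removeFromParentAndCleanup(true), because after removal the sprite no longer has a parent.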

Related

How do I open a window depending on the user's choice in the previous window in Qt?

Okay, so basically I am creating an interface that has a seller and a customer. The user chooses which option he wants and is taken to the "registration window". After filling in the questions he presses continue, and it takes him to either the "seller" interface or the "customer" interface.
I want to know how, in Qt, I make
void RegistrationWindow::on_continue_pushButton_clicked()
take the user to the interface of the option chosen in the previous window.
I know I will need to use an if statement, but I don't know what the condition will be or how I will make it work.
Here is the code for the registration window continue button:
void RegistrationWindow::on_continue_pushButton_clicked()
{
    hide();
    if () // condition?
    {
        customer = new CustomerWindow(this);
        customer->show();
    }
    if () // condition?
    {
        seller = new SellerWindow(this);
        seller->show();
    }
}
And here is the code for the main window buttons:
void MainWindow::on_customer_pushButton_clicked()
{
    hide();
    registration = new RegistrationWindow(this);
    registration->show();
}
void MainWindow::on_seller_pushButton_clicked()
{
    hide();
    registration = new RegistrationWindow(this);
    registration->show();
}
If any other information is needed I will provide it.
Thank you!
I have not tried anything yet because I don't understand where to start.
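One possible approach, sketched here as an assumption rather than as the asker's code: give RegistrationWindow a constructor parameter recording which button was clicked, and branch on it in the slot. The Role enum and the m_role member are hypothetical names.

// Hypothetical sketch: carry the user's choice into RegistrationWindow.
enum class Role { Customer, Seller };

// In MainWindow, each button passes its role along:
void MainWindow::on_customer_pushButton_clicked()
{
    hide();
    registration = new RegistrationWindow(Role::Customer, this);
    registration->show();
}

void MainWindow::on_seller_pushButton_clicked()
{
    hide();
    registration = new RegistrationWindow(Role::Seller, this);
    registration->show();
}

// RegistrationWindow stores the role in m_role and branches on it:
void RegistrationWindow::on_continue_pushButton_clicked()
{
    hide();
    if (m_role == Role::Customer)
    {
        customer = new CustomerWindow(this);
        customer->show();
    }
    else
    {
        seller = new SellerWindow(this);
        seller->show();
    }
}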

Qt5 Not Registering Touch Events

I'm working on determining if a certain touchscreen will be compatible with an application and recently got a loaner model of an Elo 2402L touchscreen. I've installed the driver the company provides and was able to see multi-touch events using the evtest utility (parser for /dev/input/eventX).
The thing is that I'm running Scientific Linux 6.4, which uses Linux kernel 2.6.32. I've seen a lot of mixed information on touchscreen compatibility for Linux kernels before 3.x.x. Elo says that their driver only supports single-touch for 2.6.32. Also, I've seen people say that the majority of the compatibility issues with touch events in this kernel version are with Xorg interfaces.
I developed a very simple Qt5 application to test whether Qt could detect the touch events or not, because I'm not sure whether Qt applications are X-based and if they read events directly from /dev/input or something else.
However, while a simple mouse event handler registers mouse events correctly, the simple touch event handler I created does nothing when I touch the main screen. There is a beep (part of the driver that Elo provides beeps when the screen is touched), so I know that SOMETHING is registering that touch, but neither the desktop nor this application seems to recognize the touch event.
Also, yes, the WA_AcceptTouchEvents attribute is set to true in the window's constructor.
I have a simple mainwindow.h:
...
protected:
    int touchEvent(QTouchEvent *ev);
...
And mainwindow.cpp:
MainWindow::MainWindow(QWidget *parent) {
    ...
    setAttribute(Qt::WA_AcceptTouchEvents, true);
    touchPoints = 0;
}
...
int MainWindow::touchEvent(QTouchEvent *ev) {
    switch(ev->type()) {
    case QEvent::TouchBegin:
        touchPoints++;
        break;
    case QEvent::TouchEnd:
        touchPoints--;
        break;
    }
    ui->statusBar->showMessage("Touch Points: " + touchPoints);
}
Is there something wrong with the way I'm using the touch event handler? Or is there some issue with the device itself? Does Qt read input events directly from /dev/input, or does it get its input events from X?
Very confused here, as I haven't used Qt before and want to narrow down the cause before I say that it's the device causing the issue.
Also, if anyone has any insight into the device / kernel compatibility issue, that would be extremely helpful.
The QTouchEvent documentation says:
Touch events occur when pressing, releasing, or moving one or more touch points on a touch device (such as a touch-screen or track-pad). To receive touch events, widgets have to have the Qt::WA_AcceptTouchEvents attribute set and graphics items need to have the acceptTouchEvents attribute set to true.
Probably you just need to call setAttribute(Qt::WA_AcceptTouchEvents, true) inside the MainWindow constructor.
Is there something wrong with the way I'm using the touch event handler?
There is no touch event handler. If you change:
int touchEvent(QTouchEvent *ev);
to:
int touchEvent(QTouchEvent *ev) override;
(which you should always do when you are trying to override virtual functions so you can catch exactly this kind of mistake), you'll see that there is no such function for you to override. What you need to override is the event() handler:
protected:
    bool event(QEvent *ev) override;
You need to check for touch events there:
bool MainWindow::event(QEvent *ev)
{
    switch(ev->type()) {
    case QEvent::TouchBegin:
        touchPoints++;
        break;
    case QEvent::TouchEnd:
        touchPoints--;
        break;
    default:
        return QMainWindow::event(ev);
    }
    ui->statusBar->showMessage(QString("Touch Points: %1").arg(touchPoints));
    return true;
}
However, it might be better to work with gestures instead of touch events. But I don't know what kind of application you're writing. If you wanted to let Qt recognize gestures rather than implementing them yourself through touch events, you would first grab the gestures you want, in this case pinching:
setAttribute(Qt::WA_AcceptTouchEvents);
grabGesture(Qt::PinchGesture);
and then handle it:
bool MainWindow::event(QEvent *ev)
{
    if (ev->type() != QEvent::Gesture) {
        return QMainWindow::event(ev);
    }
    auto* gestEv = static_cast<QGestureEvent*>(ev);
    if (auto* gest = gestEv->gesture(Qt::PinchGesture)) {
        auto* pinchGest = static_cast<QPinchGesture*>(gest);
        auto sf = pinchGest->scaleFactor();
        // You could use the pinch scale factor here to zoom an image,
        // for example.
        ev->accept();
        return true;
    }
    return QMainWindow::event(ev);
}
Working with gestures instead of touch events has the advantage of using the platform's gesture recognition facilities, like those of Android and iOS. But again, I don't know what kind of application you're writing or what kind of platform you're working on.

Detect bb10 application goes to background

First, this is my first development using the BB10 SDK and also with QML + C++.
I'm trying to capture the moment when the user slides up from the BlackBerry logo to minimize or switch apps. According to the official documentation http://developer.blackberry.com/native/documentation/core/com.qnx.doc.native_sdk.devguide/com.qnx.doc.native_sdk.devguide/topic/c_appfund_applifecycle.html
there is a window state NAVIGATOR_WINDOW_INACTIVE that applies when the invisible() method is called.
The thing here is that neither the documentation nor the searching I've done on the internet explains where to override a method that listens for this event.
Any help would be greatly appreciated.
You need to create a subclass of QObject. If you use the project creation wizard, Momentics will do this for you as applicationui.hpp and applicationui.cpp. In this class, declare the following slots in applicationui.hpp:
public slots:
    void asleep();
    void awake();
    void invisible();
    void thumbnail();
    void fullscreen();
Then, in the class constructor, attach the Application signals to your slots:
bool c = QObject::connect(Application::instance(), SIGNAL(asleep()),
                          this, SLOT(asleep()));
Q_ASSERT(c);
c = QObject::connect(Application::instance(), SIGNAL(awake()),
                     this, SLOT(awake()));
Q_ASSERT(c);
c = QObject::connect(Application::instance(), SIGNAL(invisible()),
                     this, SLOT(invisible()));
Q_ASSERT(c);
c = QObject::connect(Application::instance(), SIGNAL(thumbnail()),
                     this, SLOT(thumbnail()));
Q_ASSERT(c);
c = QObject::connect(Application::instance(), SIGNAL(fullscreen()),
                     this, SLOT(fullscreen()));
Q_ASSERT(c);
Q_UNUSED(c);
Then define the slot functions to perform what you need to do when the application state changes into the one corresponding to the signal (I've only included one here):
void applicationui::asleep() {
    // Configure the application for sleep mode: suspend or reduce processing, etc.
}
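For the specific case in the question (detecting the swipe up from the BlackBerry logo), the thumbnail() and invisible() slots are the relevant ones; a minimal sketch of what they might do (the qDebug() output is just a placeholder):

void applicationui::thumbnail() {
    // The app has been reduced to an active frame, i.e. the user swiped
    // up to minimize or switch apps.
    qDebug() << "App thumbnailed";
}

void applicationui::invisible() {
    // The app is no longer visible on screen at all.
    qDebug() << "App invisible";
}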

Blackberry App start Webservice Null

I have a version check that runs right after my application starts.
But while the simulator is loading, it is already sending the request and all that stuff. After my application starts, the version check returns NULL, but after I close the application and open it again it gives the correct value.
1) Why is this behavior occurring, and what should I do so that the version check gives the correct value on the first attempt?
2) The app has not even been launched by the user yet, so why are its lines of code being executed?
public MyScreen() {
    Bitmap bitmap = Bitmap.getBitmapResource("background.png");
    this.getMainManager().setBackground(
            BackgroundFactory.createBitmapBackground(bitmap));
    synchronized (Application.getEventLock())
    {
        UiApplication.getUiApplication().invokeLater(new Runnable()
        {
            public void run()
            {
                Status.show("Please Wait...", Bitmap.getPredefinedBitmap(Bitmap.INFORMATION), 1000);
                LoginScreen();
            }
        });
    }
Now, what it does is show only the background screen and nothing happens, no service call; but when I start it again, it works. What's the problem? Thanks.
If your MyScreen class is actually a kind of Screen (through inheritance), then there's no need for you to synchronize on the event lock in this case. The constructor for a Screen will already be called on the UI thread, so, just simplify your code to:
public MyScreen() {
    Bitmap bitmap = Bitmap.getBitmapResource("background.png");
    this.getMainManager().setBackground(
            BackgroundFactory.createBitmapBackground(bitmap));
    UiApplication.getUiApplication().invokeLater(new Runnable()
    {
        public void run()
        {
            Status.show("Please Wait...", Bitmap.getPredefinedBitmap(Bitmap.INFORMATION), 1000);
            LoginScreen();
        }
    });
}
Also, you might be able to get rid of the invokeLater() call, too, leaving you with this:
public MyScreen() {
    Bitmap bitmap = Bitmap.getBitmapResource("background.png");
    this.getMainManager().setBackground(
            BackgroundFactory.createBitmapBackground(bitmap));
    Status.show("Please Wait...", Bitmap.getPredefinedBitmap(Bitmap.INFORMATION), 1000);
    LoginScreen();
}
You would normally use invokeLater() if you just wanted to safely initiate the code inside its run() method from a background thread, or if you wanted to queue it to be run after the constructor finishes.
But, if you're ready for it to happen right away, and you were just using that call to ensure that
    Status.show("Please Wait...", Bitmap.getPredefinedBitmap(Bitmap.INFORMATION), 1000);
    LoginScreen();
was run on the UI thread, then there's no need for that, because as I said, you're already on the UI thread in the MyScreen constructor.
But, I also can't see what you do at the end of your MyScreen constructor, so it's possible that using invokeLater() is appropriate.
Post some more information in response to my comment above, and I'll try to help with more.

Common Design for Console and GUI

I am designing a little game for my own fun and training. The real identity of the game is quite irrelevant to my actual question, so suppose it's the Mastermind game (which it actually is :)
My real goal here is to have an interface IPlayer which will be used for any player: computer or human, console or GUI, local or network. I also intend to have a GameController, which will deal with just two IPlayers.
The IPlayer interface would look something like this:
class IPlayer
{
public:
    // dtor
    virtual ~IPlayer()
    {
    }
    // Call this function before the game starts. In subclasses, the
    // overriders can, for example, generate and store the combination.
    virtual void PrepareForNewGame() = 0;
    // Make the current guess.
    virtual Combination MakeAGuess() = 0;
    // Return false if a lie is detected.
    virtual bool ProcessResult(Combination const &, Result const &) = 0;
    // Answer the opponent's guess.
    virtual Result AnswerToOpponentsGuess(Combination const &) = 0;
};
The GameController class would do something like this:
IPlayer* pPlayer1 = PlayerFactory::CreateHumanPlayer();
IPlayer* pPlayer2 = PlayerFactory::CreateCPUPlayer();
pPlayer1->PrepareForNewGame();
pPlayer2->PrepareForNewGame();
while(no_winner)
{
    Combination g = pPlayer1->MakeAGuess();
    Result r = pPlayer2->AnswerToOpponentsGuess(g);
    bool player2HasLied = !pPlayer1->ProcessResult(g, r);
    // etc.
    // etc.
}
With this design, I intend to make the GameController class immutable; that is, I put just the game rules in it and nothing else, so once the game itself is established, this class shouldn't change. For a console game this design would work perfectly: I would have a HumanPlayer, which in its MakeAGuess method would read a Combination from standard input, and a CPUPlayer, which would generate one randomly, etc.
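To illustrate, a minimal sketch of that console-side player; ParseCombination() is a hypothetical helper, and the other overrides are elided:

#include <iostream>
#include <string>

class HumanPlayer : public IPlayer
{
public:
    Combination MakeAGuess() override
    {
        // Blocking, synchronous input: fine for a console game, but this
        // is exactly the call a GUI player cannot block on.
        std::string line;
        std::getline(std::cin, line);
        return ParseCombination(line);  // hypothetical parsing helper
    }
    // PrepareForNewGame, ProcessResult, AnswerToOpponentsGuess elided...
};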
Now here's my problem: the IPlayer interface, along with the GameController class, is synchronous in nature. I can't imagine how I would implement the GUI variant of the game with the same GameController, when the MakeAGuess method of a GUIHumanPlayer would have to wait for, for example, some mouse movements and clicks. Of course, I could launch a new thread which would wait for user input while the main thread blocks, so as to imitate synchronous IO, but somehow this idea disgusts me. Or, alternatively, I could design both the controller and the players to be asynchronous. In that case, for a console game, I would have to imitate asynchronousness, which seems easier than the first version.
Would you kindly comment on my design and on my concerns about choosing a synchronous or asynchronous design? Also, I feel that I put more responsibility on the player class than on the GameController class. Etc., etc.
Thank you very much in advance.
P.S. I don't like the title of my question. Feel free to edit it :)
Instead of using return values of the various IPlayer methods, consider introducing an observer class for IPlayer objects, like this:
class IPlayerObserver
{
public:
    virtual ~IPlayerObserver() { }
    virtual void guessMade( Combination c ) = 0;
    // ...
};

class IPlayer
{
public:
    virtual ~IPlayer() { }
    virtual void setObserver( IPlayerObserver *observer ) = 0;
    // ...
};
The methods of IPlayer should then call the appropriate methods of an installed IPlayerObserver instead of returning a value, as in:
void HumanPlayer::makeAGuess() {
    // get input from human
    Combination c;
    c = ...;
    m_observer->guessMade( c );
}
Your GameController class could then implement IPlayerObserver so that it gets notified whenever a player did something interesting, like making a guess.
With this design, it's perfectly fine if all the IPlayer methods are asynchronous. In fact, it's to be expected: they all return void! Your game controller calls makeAGuess on the active player (this might compute the result immediately, or it might do some network IO for multiplayer games, or it might wait for the GUI to do something), and whenever the player has made his choice, the game controller can rest assured that the guessMade method will be called. Furthermore, the player objects still don't know anything about the game controller. They are just dealing with an opaque IPlayerObserver interface.
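A sketch of the controller side under this design; the turn logic inside guessMade() is just a placeholder:

class GameController : public IPlayerObserver
{
public:
    GameController( IPlayer *p1, IPlayer *p2 )
        : m_player1( p1 ), m_player2( p2 )
    {
        m_player1->setObserver( this );
        m_player2->setObserver( this );
    }

    void startGame()
    {
        m_player1->makeAGuess();  // kicks off the turn cycle; returns immediately
    }

    // IPlayerObserver interface: called back whenever a player has acted.
    void guessMade( Combination c ) override
    {
        // Placeholder: forward the guess to the opponent, check for a
        // winner, and ask the next player to move when appropriate.
    }

private:
    IPlayer *m_player1;
    IPlayer *m_player2;
};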
The only thing making this different for the GUI compared to the console is that your GUI is event-driven. Those events take place on the GUI thread, and therefore, if you host the game code on the GUI thread, you have a problem: your call to have the player make a move blocks the GUI thread, which means you can't get any events until that call returns. But the call can't return until it gets the event. So you're deadlocked.
That problem would go away if you simply host the game code on another thread. You'd still need to synchronize the threads, so MakeAGuess() doesn't return until ready, but it's certainly doable.
If you want to keep everything single-threaded, you may want to consider a different model: the Game could notify the Players that it's their turn with an event, but leave it to the players to initiate operations on the Game.
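A rough sketch of that single-threaded pull model; yourTurn() and submitGuess() are hypothetical names, not part of the interfaces above:

class Game;

class IPlayer
{
public:
    virtual ~IPlayer() { }
    // Hypothetical: the game tells the player it's his turn. A CPU player
    // would call Game::submitGuess() immediately; a GUI player would do so
    // later, from a button-click handler. Either way, this returns at once.
    virtual void yourTurn( Game &game ) = 0;
};

class Game
{
public:
    void startTurn( IPlayer &player ) { player.yourTurn( *this ); }

    // Players call back in whenever they have decided on a guess.
    void submitGuess( IPlayer &player, Combination c )
    {
        // Evaluate the guess, check for a winner, then start the
        // opponent's turn.
    }
};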