cocos2d, can we call a function in "sequence" on a sprite? - cocos2d-iphone

First I am moving the sprite from A to B. When it reaches B, I need to call a function to work on the sprite, i.e. fade, change the image, etc. I am currently doing it with a "sequence".
var seq_action = cc.Sequence.create(move_action, this.check_basket_under());
But it gives me this error -
CCActionInterval.js:507 Uncaught TypeError: Cannot read property '_timesForRepeat' of undefined
at Function.cc.sequence [as create] (CCActionInterval.js:507)
at Class.sprite_create (app.js:110)
at Class.trigger (CCScheduler.js:261)
at Class.update (CCScheduler.js:167)
at Class.update (CCScheduler.js:480)
at Class.drawScene (CCDirector.js:226)
at Class.mainLoop (CCDirector.js:884)
at callback (CCBoot.js:2160)

In a Sequence you can't call a function directly; add a CallFunc or CallBlock action instead.
Here is a C++ example:
auto scale = EaseElasticOut::create(ScaleTo::create(3.0f, 1.0f));
auto calb = CallFunc::create([this]() {
    this->check_basket_under();
});
auto seq = Sequence::create(scale, calb, NULL);
GameOverLabel->runAction(seq);
For more info, google CallFunc and CallFuncN usage in Sequence.
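For the cocos2d-js code in the question, the equivalent would be roughly the following (a sketch; move_action is the existing move action and sprite stands in for whatever node should run it). Note that this.check_basket_under() calls the function immediately and passes its return value (undefined) into the sequence, which is what appears to produce the "_timesForRepeat of undefined" error:
var call_action = cc.CallFunc.create(this.check_basket_under, this); // wrap the callback instead of invoking it
var seq_action = cc.Sequence.create(move_action, call_action);
sprite.runAction(seq_action);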

Related

Read a list of parameters from a LuaRef using LuaBridge

[RESOLVED]
I'm building a game engine that uses LuaBridge to read components for entities. In my engine, an entity file looks like this, where "Components" is a list of the components that my entity has and the rest of the parameters are used to set up the values for each individual component:
-- myEntity.lua
Components = {"MeshRenderer", "Transform", "Rigidbody"}
MeshRenderer = {
Type = "Sphere",
Position = {0,300,0}
}
Transform = {
Position = {0,150,0},
Scale = {1,1,1},
Rotation = {0,0,0}
}
Rigidbody = {
Type = "Sphere",
Mass = 1
}
I'm currently using this function (in C++) to read the value of a parameter (given its name) inside a LuaRef.
template<class T>
T readParameter(LuaRef& table, const std::string& parameterName)
{
    try {
        return table.rawget(parameterName).cast<T>();
    }
    catch (const std::exception& e) {
        // std::cout ...
        return NULL;
    }
}
For example, when calling readParameter<std::string>(myRigidbodyTable, "Type"), with myRigidbodyTable being a LuaRef holding the values of Rigidbody, this function should return a std::string with the value "Sphere".
My problem is that after I finish reading and storing the values of my Transform component, when I want to read the values for "Rigidbody" and my engine reads the value "Type", an unhandled exception is thrown at Stack::push(lua_State* L, const std::string& str, std::error_code&).
I am pretty sure that this has to do with the fact that my component Transform stores a list of values for parameters like "Position", because I've had no problems while reading components that only had a single value for each parameter. What's the right way to do this, in case I am doing something wrong?
I'd also like to point out that I am new to LuaBridge, so this might be a beginner problem with a solution that I've been unable to find. Any help is appreciated :)
Found the problem: I wasn't reading the table properly. Instead of
LuaRef myTable = getGlobal(state, tableName.c_str());
I was using the following
LuaRef myTable = getGlobal(state, tableName.c_str()).getMetatable();
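As a side note on the nested parameters mentioned above, here is a hedged sketch (not from the original post) of reading a three-element list such as Position once the table has been fetched with getGlobal(...), assuming LuaBridge's usual LuaRef operator[] and cast<T>():
#include <array>
#include <string>
#include <LuaBridge/LuaBridge.h>

// Hypothetical helper: reads e.g. Transform["Position"] = {0,150,0}.
std::array<float, 3> readVec3(luabridge::LuaRef& table, const std::string& parameterName)
{
    luabridge::LuaRef list = table[parameterName];
    // Lua arrays are 1-indexed.
    return { list[1].cast<float>(), list[2].cast<float>(), list[3].cast<float>() };
}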

Why is the lambda function not copying out the struct?

I'm modifying a code-base to get a struct (TheClipInfo) out of a lambda so I can return one of its properties (CurrentFrameCount).
I think I'm passing it by reference, but clearly I'm missing something.
Comments below show where I modified the code.
int32 UTimeSynthComponent::StopClipOffset(FTimeSynthClipHandle InClipHandle, ETimeSynthEventClipQuantization EventQuantization)
{
    Audio::EEventQuantization StopQuantization = GlobalQuantization;
    if (EventQuantization != ETimeSynthEventClipQuantization::Global)
    {
        int32 ClipQuantizationEnumIndex = (int32)EventQuantization;
        check(ClipQuantizationEnumIndex >= 1);
        StopQuantization = (Audio::EEventQuantization)(ClipQuantizationEnumIndex - 1);
    }
    FPlayingClipInfo TheClipInfo; // I want the lambda to put data here.
    SynthCommand([this, InClipHandle, StopQuantization, &TheClipInfo] // The first lambda
    {
        EventQuantizer.EnqueueEvent(StopQuantization,
            [this, InClipHandle, &TheClipInfo](uint32 NumFramesOffset) // The second lambda
            {
                int32* PlayingClipIndex = ClipIdToClipIndexMap_AudioRenderThread.Find(InClipHandle.ClipId);
                if (PlayingClipIndex)
                {
                    // Grab the clip info
                    FPlayingClipInfo& PlayingClipInfo = PlayingClipsPool_AudioRenderThread[*PlayingClipIndex]; // The struct I want to get out.
                    // Only do anything if the clip is not already fading
                    if (PlayingClipInfo.CurrentFrameCount < PlayingClipInfo.DurationFrames)
                    {
                        // Adjust the duration of the clip to "spoof" its code, which triggers a fade in this render callback block.
                        PlayingClipInfo.DurationFrames = PlayingClipInfo.CurrentFrameCount + NumFramesOffset;
                    }
                    TheClipInfo = PlayingClipInfo; // I think this should make a copy.
                }
            });
    });
    return TheClipInfo.CurrentFrameCount; // This is always returning 0.
}
I'm assuming this is all happening in the same thread and in order (not some async callback like JavaScript).
My first attempt was with an int32, but that can't be passed by reference. I really want just one value from it.
Snap!
EventQuantizer.EnqueueEvent
adds the second lambda to an event queue, so it is asynchronous, which is why it's not working as I'd like.
The simplest thing that could work is to copy the code out of the lambda and use it for the return.
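To make the timing concrete, here is a minimal, self-contained sketch (not the engine code) of why a reference captured by a queued lambda still reads 0 at the return statement:
#include <functional>
#include <iostream>
#include <vector>

std::vector<std::function<void()>> gQueue; // stands in for the event quantizer's queue

int stopClip()
{
    int frameCount = 0;                                   // plays the role of TheClipInfo
    gQueue.push_back([&frameCount] { frameCount = 42; }); // only enqueued here, not executed
    int result = frameCount;                              // still 0 -- nothing has run yet
    for (auto& f : gQueue) f();                           // the deferred work happens later
    gQueue.clear();                                       // (in the real code it runs on the audio render
    return result;                                        //  thread, possibly after the local is gone)
}

int main()
{
    std::cout << stopClip() << "\n"; // prints 0
}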

Why can't glium `Headless` render an image like a normal window context?

I am working on an off-screen rendering program and I use the glium crate to do this. I followed the screenshot.rs example and it worked well.
Then I made some changes.
The original code was:
fn main() {
// building the display, ie. the main object
let event_loop = glutin::EventsLoop::new();
let wb = glutin::WindowBuilder::new().with_visible(true);
let cb = glutin::ContextBuilder::new();
let display = glium::Display::new(wb, cb, &event_loop).unwrap();
// building the vertex buffer, which contains all the vertices that we will draw
I grouped this code into a function:
fn main() {
// building the display, ie. the main object
let event_loop = glutin::EventsLoop::new();
let display = build_display((128,128), &event_loop);
// building the vertex buffer, which contains all the vertices that we will draw
pub fn build_display(size: (u32, u32), event_loop: &glutin::EventsLoop) -> glium::Display {
let version = parse_version(); //this will return `OpenGL 3.3`
let wb = glutin::WindowBuilder::new()
.with_visibility(false)
.with_dimensions(glutin::dpi::LogicalSize::from(size));
let cb = glutin::ContextBuilder::new()
.with_gl(version);
glium::Display::new(wb, cb, &event_loop).unwrap()
}
After this modification, the program still worked well, so I continued by adding the headless context:
fn main() {
// building the display, ie. the main object
let event_loop = glutin::EventsLoop::new();
let display = build_display_headless((128,128), &event_loop);
// building the vertex buffer, which contains all the vertices that we will draw
pub fn build_display_headless(size: (u32, u32), event_loop: &glutin::EventsLoop) -> glium::HeadlessRenderer {
let version = parse_version(); // this will return `OpenGL 3.3`
let ctx = glutin::ContextBuilder::new()
.with_gl(version)
.build_headless(&event_loop, glutin::dpi::PhysicalSize::from(size))
.expect("1");
//let ctx = unsafe { ctx.make_current().expect("3") };
glium::HeadlessRenderer::new(ctx).expect("4")
}
But this time the program did not work. There was no panic while running, but the output image was completely black, and its size was not 128x128 but 800x600.
I have tried removing libEGL.dll so that, per the glutin crate's docs, .build_headless will build a window and hide it, just as my build_display function does. However, this failed too. So what can cause this?

Scala.js js.Dynamic usage leads to a recursively infinite data structure

I'm trying to use Google Visualizations from Scala.js. I generated the type definitions using TS importer and the relevant portion it generated is:
@js.native
trait ColumnChartOptions extends js.Object {
var aggregationTarget: String = js.native
var animation: TransitionAnimation = js.native
var annotations: ChartAnnotations = js.native
// ... more
}
@js.native
trait TransitionAnimation extends js.Object {
var duration: Double = js.native
var easing: String = js.native
var startup: Boolean = js.native
}
Now, I'm trying to figure out how to actually use this and came up with:
val options = js.Dynamic.literal.asInstanceOf[ColumnChartOptions]
options.animation = js.Dynamic.literal.asInstanceOf[TransitionAnimation] // comment this and the next line and chart will appear
options.animation.duration = 2000
options.title = "Test Chart"
options.width = 400
options.height = 300
This works if I don't set the animation settings, but fails with the chart showing "Maximum call stack size exceeded" if I do.
I debugged and found that animation contains a reference to itself, but I don't feel like this should happen based on the code above.
Ideas how to fix it?
Any other suggestions on how to best use the generated types to provide a type-safe way of creating the JavaScript objects which Google Visualizations expects? I tried new ColumnChartOptions {} which looks cleaner than js.Dynamic but that failed with "A Scala.js-defined JS class cannot directly extend a native JS trait."
P.S. I'd like to note that
options.animation = js.Dynamic.literal(
easing = "inAndOut",
startup = true,
duration = 2000
).asInstanceOf[TransitionAnimation]
actually works, but isn't type-safe (a misspelling of duration as durration won't be caught).
Your code lacks the () when calling literal, so the fix would be:
val options = js.Dynamic.literal().asInstanceOf[ColumnChartOptions]
options.animation = js.Dynamic.literal().asInstanceOf[TransitionAnimation] // comment this and the next line and chart will appear
In Scala (and therefore in Scala.js), the presence or absence of () is sometimes meaningful. literal is the singleton object literal, whereas literal() calls the method apply() of said object.
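Put together, the corrected usage would look roughly like this (a sketch reusing the ColumnChartOptions and TransitionAnimation facades from the question):
import scala.scalajs.js

// literal() returns a fresh, empty JS object each time; bare `literal` is the factory object itself.
val options = js.Dynamic.literal().asInstanceOf[ColumnChartOptions]
options.animation = js.Dynamic.literal().asInstanceOf[TransitionAnimation]
options.animation.duration = 2000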

Why is the computed property being updated even though I didn't specify its dependencies?

I have a schema structured something like this:
App = {};
App.Outer = Ember.Object.extend({
  inner: null,
  quantity: 0,
  count: function () {
    var self = this, inner = self.get('inner');
    return self.get('quantity') * inner.get('count');
  }.property('nothing')
});
App.Inner = Ember.Object.extend({
  count: 0
});
Yes, the 'count' computed property really is set to depend on a totally nonexistent property 'nothing'. However it seems to get updated anyway:
var o1 = App.Outer.create({
  quantity: 2,
  inner: App.Inner.create({count: 4})
});
console.log(o1.get('count')); // => 8
o1.get('inner').set('count', 5);
console.log(o1.get('count')); // => 10
o1.set('inner', App.Inner.create({count: 10}));
console.log(o1.get('count')); // => 20
Am I missing something? It knows what to update without me telling it what to depend on... can't be right, can it? What am I misunderstanding about Ember computed properties?
Thanks
By using this.get('quantity') and inner.get('count') you are telling it what it depends on. Every time you call .get('count'), the function runs again, fetches the current values of those properties, and therefore returns an up-to-date result.
The .property() part comes into play when you bind the computed property count to something else, e.g. a view. When you do that, making a change to quantity will automatically recalculate count, and the new value will be propagated to whatever you have bound count to.
You can see the difference in action here: http://jsfiddle.net/tomwhatmore/6gz8x/
As of Ember 0.9.5, property values are not cached unless cacheable() is called on them, e.g.:
...
count: function () {
  var self = this, inner = self.get('inner');
  return self.get('quantity') * inner.get('count');
}.property('nothing').cacheable()
...
For more background, see the discussion on this GitHub issue: https://github.com/emberjs/ember.js/issues/38
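For completeness, a hedged sketch (not part of the answers above) of what Outer would look like with the real dependencies declared, assuming the dependent-key path syntax available in Ember at that time, so that count is both cached and invalidated correctly when bound:
App.Outer = Ember.Object.extend({
  inner: null,
  quantity: 0,
  count: function () {
    var inner = this.get('inner');
    return this.get('quantity') * inner.get('count');
  }.property('quantity', 'inner.count').cacheable()
});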