I use cocos2d 1.1 and Xcode 4.5 for my game project. I would like to rework my game to support the iPhone 5, but I ran into a problem: cocos2d 1.1 cannot detect sprites for the 4-inch retina display.
Default-568h@2x.png works fine, but the game's sprites appear as *-hd.png.
It seems cocos2d 1.1 can only detect *-hd.png, even though I added *-568h@2x.png versions of the sprites.
The solution to this problem is in the CCFileUtils.m file, as sergio describes below. I made small changes in the method +(NSString*) getDoubleResolutionImage:(NSString*)path:
+(NSString*) getDoubleResolutionImage:(NSString*)path
{
#if CC_IS_RETINA_DISPLAY_SUPPORTED
    if( CC_CONTENT_SCALE_FACTOR() == 2 )
    {
        NSString *pathWithoutExtension = [path stringByDeletingPathExtension];
        NSString *name = [pathWithoutExtension lastPathComponent];
        NSString *extension = [path pathExtension];

        // Handle compound extensions such as .pvr.ccz / .pvr.gz
        if( [extension isEqualToString:@"ccz"] || [extension isEqualToString:@"gz"] )
        {
            extension = [NSString stringWithFormat:@"%@.%@", [pathWithoutExtension pathExtension], extension];
            pathWithoutExtension = [pathWithoutExtension stringByDeletingPathExtension];
        }

        // On a 4-inch retina display, look for the -568h@2x variant first
        CGFloat screenHeight = [UIScreen mainScreen].bounds.size.height;
        if( [UIScreen mainScreen].scale == 2.f && screenHeight == 568.0f )
        {
            if( [name rangeOfString:CC_RETINA4_DISPLAY_FILENAME_SUFFIX].location != NSNotFound ) {
                CCLOG(@"cocos2d: WARNING Filename(%@) already has the suffix %@. Using it.", name, CC_RETINA4_DISPLAY_FILENAME_SUFFIX);
                return path;
            }

            NSString *retinaName = [pathWithoutExtension stringByAppendingString:CC_RETINA4_DISPLAY_FILENAME_SUFFIX];
            retinaName = [retinaName stringByAppendingPathExtension:extension];
            if( [__localFileManager fileExistsAtPath:retinaName] )
            {
                return retinaName;
            }
        }

        // Fall back to the regular -hd variant
        if( [name rangeOfString:CC_RETINA_DISPLAY_FILENAME_SUFFIX].location != NSNotFound ) {
            CCLOG(@"cocos2d: WARNING Filename(%@) already has the suffix %@. Using it.", name, CC_RETINA_DISPLAY_FILENAME_SUFFIX);
            return path;
        }

        NSString *retinaName = [pathWithoutExtension stringByAppendingString:CC_RETINA_DISPLAY_FILENAME_SUFFIX];
        retinaName = [retinaName stringByAppendingPathExtension:extension];
        if( [__localFileManager fileExistsAtPath:retinaName] )
        {
            return retinaName;
        }

        CCLOG(@"cocos2d: CCFileUtils: Warning HD file not found: %@", [retinaName lastPathComponent] );
    }
#endif // CC_IS_RETINA_DISPLAY_SUPPORTED

    return path;
}
and also add this to the ccConfig.h file:
#ifndef CC_RETINA4_DISPLAY_FILENAME_SUFFIX
#define CC_RETINA4_DISPLAY_FILENAME_SUFFIX @"-568h@2x"
#endif
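With this define in place, on a 4-inch retina device the method above looks for the *-568h@2x.png variant of a sprite first, and falls back to the usual *-hd.png if that file does not exist.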
If anyone has any remarks, please write them.
As far as I know, there is no general support in Cocos2D 2.x for iPhone 5 -568h@2x images.
The only iPhone 5 specific support added to Cocos2D 2.1 concerns the addition of a Default-568h@2x.png image to the Xcode template. Read the ChangeLog for details.
On the other hand, UIKit has no support for "-568h@2x images" either, so I don't think that Cocos2D is going to add one.
On a more conceptual level, I understand that the general approach to supporting the iPhone 5 resolution is not at the bitmap level (i.e., providing differently scaled images), but rather at the layout level (i.e., changing the disposition or sizes of non-image UI elements). (If you think about it, we already have to manage 1x and 2x images, both for iPhone and iPad: that is 4 different versions of each image; adding another dimension to this would be crazy.)
If your app really does need scaled images, then I guess you are on your own, both with UIKit and with Cocos2D.
On the bright side, if you take a look at CCFileUtils.h, you can easily change it so that it supports the -568h@2x suffix. For a discussion of this, have a look at this blog post, which describes an analogous change for the iPad 3. It might help you in building your own solution.
You can change the retina suffix in the ccConfig.h file:
#ifndef CC_RETINA_DISPLAY_FILENAME_SUFFIX
#define CC_RETINA_DISPLAY_FILENAME_SUFFIX @"-hd"
#endif
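For example, with the default value above, a sprite requested as player.png resolves to player-hd.png on a retina device (player.png is just a hypothetical file name).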
Related: https://github.com/cocos2d/cocos2d-x/blob/v4/cocos/editor-support/spine/Json.cpp
I need help loading a .txt file and pulling some text out of it using cocos, for an old app. Can anyone work up a simple example?
The backstory is that I wrote a working app about 5-6 years ago, when cocos used a different JSON library. They changed the library and I can't decipher the new one well enough to get the app working again. I am not a programmer, but made the app as a favor for a hospital. The JSON is used to switch between languages for the script. I don't really even know how to ask a technical question about the library. I know the code is all there, but I don't know how to make it work...
Thanks :)
cocos2dx v4 Json implementation
This is what I figured out eventually. Any improvements you can suggest are welcome.
I use this to read the JSON response from a translation API:
// response: the HttpResponse from the translation request
std::vector<char> * buffer = response->getResponseData();
std::string s2(buffer->begin(), buffer->end()); // copy into a null-terminated string
CCLOG("DEBUG |%s|", s2.c_str());

Json * json = Json_create(s2.c_str());
Json * responseData = Json_getItem(json, "responseData");
const char * var22 = Json_getString(responseData, "translatedText", "default");
// ... use var22, then release the tree (which also frees the returned string):
Json_dispose(json);
The JSON response used:
{"responseData":
    {"translatedText":"ni\u00f1o"}, .....
and I copied the old json.c and json.h into my classes dir.
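To answer the original question about loading text from a file: below is a minimal sketch of reading a JSON file from the app bundle and pulling one string out of it with the same Json API. It assumes the spine Json.h that ships with cocos2d-x v4; the file name "strings.json" and the keys "es" and "greeting" are hypothetical.

// Read the whole file into a string using cocos2d-x's FileUtils
std::string raw = cocos2d::FileUtils::getInstance()->getStringFromFile("strings.json");
Json* root = Json_create(raw.c_str());
if (root) {
    // Pick the block for one language, then one entry inside it
    Json* spanish = Json_getItem(root, "es");
    if (spanish) {
        const char* greeting = Json_getString(spanish, "greeting", ""); // "" is the fallback value
        CCLOG("greeting: %s", greeting);
    }
    Json_dispose(root); // frees the whole tree, including the strings returned above
}

For reference, the readCurve function below (from the spine runtime that ships with cocos2d-x) shows how the same API is used to walk nested objects and arrays: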
static void readCurve (Json* frame, spCurveTimeline* timeline, int frameIndex) {
    Json* curve = Json_getItem(frame, "curve");
    if (!curve) return;
    if (curve->type == Json_String && strcmp(curve->valueString, "stepped") == 0)
        spCurveTimeline_setStepped(timeline, frameIndex);
    else if (curve->type == Json_Array) {
        // A bezier curve is stored as an array of four floats
        Json* child0 = curve->child;
        Json* child1 = child0->next;
        Json* child2 = child1->next;
        Json* child3 = child2->next;
        spCurveTimeline_setCurve(timeline, frameIndex, child0->valueFloat, child1->valueFloat,
            child2->valueFloat, child3->valueFloat);
    }
}
I'm creating a game with (modern) OpenGL, C++, glm, glfw, and openvr.h, and I'm currently learning from the provided example source code (hellovr_opengl_main.cpp). OpenVR provides APIs for both the view and projection matrices, which used in combination replicate what your eyes would see in real life through virtual reality goggles.
I implemented framebuffers in my OpenGL application and everything worked perfectly, so all is OK there. I then followed the examples and tried to access the OpenVR APIs myself. After that failed, I outright copied and pasted from the example code the entire hierarchy of everything relevant to this problem and called everything in the same order as in the example, but that didn't work either. I've tried everything I can think of and I can't figure it out.
I use their matrix definitions as they are until the data reaches my code, where I convert it to glm::mat4. I did get the view matrix working without any modification (such as swapping to column-major order).
glm::mat4 CMainApplication::GetCurrentViewProjectionMatrix(vr::Hmd_Eye nEye)
{
    Matrix4 i;
    if (nEye == vr::Eye_Left)
    {
        i = m_mat4ProjectionLeft * m_mat4eyePosLeft * m_mat4HMDPose;
    }
    else if (nEye == vr::Eye_Right)
    {
        i = m_mat4ProjectionRight * m_mat4eyePosRight * m_mat4HMDPose;
    }

    Matrix4 i2 = m_mat4eyePosRight * m_mat4HMDPose; // works!
    view = glm::mat4(i2[0], i2[1], i2[2], i2[3], i2[4], i2[5], i2[6], i2[7], i2[8], i2[9],
                     i2[10], i2[11], i2[12], i2[13], i2[14], i2[15]);

    Matrix4 i3 = m_mat4ProjectionRight;
    project = glm::mat4(i3[0], i3[1], i3[2], i3[3], i3[4], i3[5], i3[6], i3[7], i3[8], i3[9],
                        i3[10], i3[11], i3[12], i3[13], i3[14], i3[15]); // doesn't work
    project = glm::mat4(i3[0], i3[4], i3[8], i3[12], i3[1], i3[5], i3[9], i3[13], i3[2], i3[6],
                        i3[10], i3[14], i3[3], i3[7], i3[11], i3[15]); // row -> column; doesn't work

    // Only the first return executes; the two below are earlier attempts left in for reference.
    return glm::mat4(i[0], i[4], i[8], i[12], i[1], i[5], i[9], i[13], i[2], i[6],
                     i[10], i[14], i[3], i[7], i[11], i[15]); // doesn't work
    return glm::mat4(i[0], i[4], i[8], i[12], i[1], i[5], i[9], i[13], i[2], i[6],
                     i[10], i[14], i[3], i[7], i[11], i[15]); // row -> column; doesn't work
    return glm::mat4(i[12], i[13], i[14], i[15], i[4], i[5], i[6], i[7], i[8], i[9],
                     i[10], i[11], i[0], i[1], i[2], i[3]); // replace top with bottom; doesn't work
}
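For what it's worth, here is a minimal sketch of the conversion I would expect for the raw projection matrix. It assumes the vr::HmdMatrix44_t returned by IVRSystem::GetProjectionMatrix, which is laid out row-major, whereas glm stores matrices column-major:

#include <openvr.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// glm::make_mat4 reads 16 floats in column-major order, so transposing
// afterwards turns OpenVR's row-major layout into a correct glm::mat4.
glm::mat4 HmdMat44ToGlm(const vr::HmdMatrix44_t& m)
{
    return glm::transpose(glm::make_mat4(&m.m[0][0]));
}

Note that the hellovr sample already transposes into column-major order when it builds its Matrix4 objects from the HMD matrices, which would explain why the view matrix worked without any reordering.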
I have a project for Windows CE that uses XAML for Windows Embedded (Compact 2013), also known as "Silverlight for Windows Embedded", for the GUI.
I defined an image in XAML and now I want to switch this image in the C++ code-behind.
How do I do this?
I found this solution:
m_pBatteryStateImage is the image defined in XAML.
The URIs for the images can be found in the auto-generated file PROJECTNAMEGenerated.rc2.
void MainPage::SetBatteryState(BatteryStateFlags batteryState)
{
    BSTR src = GetImageSourceUri(batteryState);
    SetImage(src);
}

void MainPage::SetImage(BSTR src)
{
    IXRApplication* application;
    App::GetApplication(&application);

    // Check which URI is currently in use
    BSTR originalSrc;
    IXRImageSource* iSource;
    m_pBatteryStateImage->GetSource(&iSource);
    IXRBitmapImagePtr bmpSrc = (IXRBitmapImagePtr)iSource;
    bmpSrc->GetUriSource(&originalSrc);

    // Set the new image only if the source URI differs
    if (wcscmp(originalSrc, src) != 0)
    {
        IXRBitmapImagePtr bitmapImage;
        application->CreateObject(IID_IXRBitmapImage, &bitmapImage);
        bitmapImage->SetUriSource(src);
        m_pBatteryStateImage->SetSource(bitmapImage);
    }

    SysFreeString(originalSrc); // GetUriSource allocates the BSTR; free it to avoid a leak
}

BSTR MainPage::GetImageSourceUri(BatteryStateFlags batteryState)
{
    BSTR src;

    // See PROJECTNAMEGenerated.rc2 - the numbers will change if images are
    // added (they are sorted alphabetically).
    // TODO make it robust against changes
    if (batteryState & BatteryChargerError)
        src = TEXT("#105");
    else if (batteryState & BatteryHigh)
        src = TEXT("#106");
    else if (batteryState & BatteryLow)
        src = TEXT("#109");
    else
        src = TEXT("#105"); // show the error image if nothing else matches (should not happen)

    return src;
}
I am trying to use CMMotionManager to update the attitude of a camera viewpoint in SceneKit. I am able to get the following code, using the default reference frame, to work.
manager.deviceMotionUpdateInterval = 0.01
manager.startDeviceMotionUpdates(to: motionQueue, withHandler: { deviceManager, error in
    if let attitude = deviceManager?.attitude {
        let rotation = attitude.quaternion
        OperationQueue.main.addOperation {
            self.cameraNode.rotation = SCNVector4(rotation.x, rotation.y, rotation.z, rotation.w)
        }
    }
})
I am however unable to get startDeviceMotionUpdates to work with a selected reference frame as shown below:
manager.deviceMotionUpdateInterval = 0.01
manager.startDeviceMotionUpdates(using: CMAttitudeReferenceFrameXMagneticNorthZVertical, to: motionQueue, withHandler: { deviceManager, error in
    if let attitude = deviceManager?.attitude {
        let rotation = attitude.quaternion
        OperationQueue.main.addOperation {
            self.cameraNode.rotation = SCNVector4(rotation.x, rotation.y, rotation.z, rotation.w)
        }
    }
})
The error I receive is:
Use of unresolved identifier 'CMAttitudeReferenceFrameXMagneticNorthZVertical'
I get similar error messages for the other reference frames as well. Can anyone shed some light on the use of the using: parameter of the startDeviceMotionUpdates function? All the examples I have found are for older versions of Swift or for Objective-C, so it is quite possible that this is simply an issue of not understanding Swift 3 syntax.
After some additional fiddling, I figured out that the using: argument expects a member of the new CMAttitudeReferenceFrame struct, i.e. it should be passed as:
manager.deviceMotionUpdateInterval = 0.01
manager.startDeviceMotionUpdates(using: CMAttitudeReferenceFrame.xMagneticNorthZVertical,
                                 to: motionQueue, withHandler: { deviceManager, error in
    if let attitude = deviceManager?.attitude {
        let rotation = attitude.quaternion
        OperationQueue.main.addOperation {
            self.cameraNode.rotation = SCNVector4(rotation.x, rotation.y, rotation.z, rotation.w)
        }
    }
})
This is a change from earlier versions, which allowed the direct use of constants such as CMAttitudeReferenceFrameXMagneticNorthZVertical.
I'm having trouble getting AVAudioEngine (OS X) to play nice with all sample rates.
Here's my code for building the connections:
- (void)makeAudioConnections {
    auto hardwareFormat = [self.audioEngine.outputNode outputFormatForBus:0];
    auto format = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:hardwareFormat.sampleRate channels:2];
    NSLog(@"format: %@", format);
    @try {
        [self.audioEngine connect:self.avNode to:self.audioEngine.mainMixerNode format:format];
        [self.audioEngine connect:self.audioEngine.inputNode to:self.avNode format:format];
    } @catch(NSException* e) {
        NSLog(@"exception: %@", e);
    }
}
On my audio interface, the render callback is called for 44.1, 48, and 176.4kHz. It is not called for 96 and 192 kHz. On the built-in audio, the callback is called for 44.1, 48, 88 but not 96.
My AU's allocateRenderResourcesAndReturnError is being called for 96kHz. No errors are returned.
- (BOOL)allocateRenderResourcesAndReturnError:(NSError * _Nullable *)outError {
    if (![super allocateRenderResourcesAndReturnError:outError]) {
        return NO;
    }
    _inputBus.allocateRenderResources(self.maximumFramesToRender);
    _sampleRate = _inputBus.bus.format.sampleRate;
    return YES;
}
Here's my AU's init method, which is mostly just cut & paste from Apple's AUv3 demo:
- (instancetype)initWithComponentDescription:(AudioComponentDescription)componentDescription options:(AudioComponentInstantiationOptions)options error:(NSError **)outError {
    self = [super initWithComponentDescription:componentDescription options:options error:outError];
    if (self == nil) {
        return nil;
    }

    // Initialize a default format for the busses.
    AVAudioFormat *defaultFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100. channels:2];

    // Create the input and output busses.
    _inputBus.init(defaultFormat, 8);
    _outputBus = [[AUAudioUnitBus alloc] initWithFormat:defaultFormat error:nil];

    // Create the input and output bus arrays.
    _inputBusArray = [[AUAudioUnitBusArray alloc] initWithAudioUnit:self busType:AUAudioUnitBusTypeInput busses:@[_inputBus.bus]];
    _outputBusArray = [[AUAudioUnitBusArray alloc] initWithAudioUnit:self busType:AUAudioUnitBusTypeOutput busses:@[_outputBus]];

    self.maximumFramesToRender = 256;
    return self;
}
To keep things simple, I'm setting the sample rate before starting the app.
I'm not sure where to begin tracking this down.
Update
Here's a small project which reproduces the issue I'm having:
Xcode project to reproduce issue
You'll get errors pulling from the input at certain sample rates.
On my built-in audio running at 96kHz the render block is called with alternating 511 and 513 frame counts and errors -10863 (kAudioUnitErr_CannotDoInCurrentContext) and -10874 (kAudioUnitErr_TooManyFramesToProcess) respectively. Increasing maximumFramesToRender doesn't seem to help.
Update 2
I simplified my test down to just connecting the input to the main mixer:
[self.audioEngine connect:self.audioEngine.inputNode to:self.audioEngine.mainMixerNode format:nil];
I tried explicitly setting the format argument.
This still will not play through at 96kHz. So I'm thinking this may be a bug in AVAudioEngine.
For play-through with AVAudioEngine, the input and output hardware formats and all the connection formats must be at the same sample rate. So the following should work.
AVAudioFormat *outputHWFormat = [self.audioEngine.outputNode outputFormatForBus:0];
AVAudioFormat *inputHWFormat = [self.audioEngine.inputNode inputFormatForBus:0];

if (inputHWFormat.sampleRate == outputHWFormat.sampleRate) {
    [self.audioEngine connect:self.audioEngine.inputNode to:self.audioEngine.mainMixerNode format:inputHWFormat];
    [self.audioEngine connect:self.audioEngine.mainMixerNode to:self.audioEngine.outputNode format:inputHWFormat];
}