Image not working in custom MKOverlayRenderer (RubyMotion)

I have a RubyMotion app with an MKMapView in a controller that I am trying to add an image overlay to.
I'm adding the overlay here (the delegate of the MKMapView instance is set to the controller itself):
image = UIImage.imageNamed("map").CGImage
lowerLeft = CLLocationCoordinate2DMake(21.652538062803, -127.620375523875420)
upperRight = CLLocationCoordinate2DMake(50.406626367301044, -66.517937876818)
overlay = ImageOverlay.alloc.initWithImageData image, withLowerLeftCoordinate:lowerLeft, withUpperRightCoordinate:upperRight
self.view.addOverlay overlay
Here's my custom overlay:
class ImageOverlay
  attr_accessor :imageData
  attr_accessor :mapRect

  def initWithImageData imageData, withLowerLeftCoordinate:lowerLeftCoordinate, withUpperRightCoordinate:upperRightCoordinate
    self.imageData = imageData
    lowerLeft = MKMapPointForCoordinate(lowerLeftCoordinate)
    upperRight = MKMapPointForCoordinate(upperRightCoordinate)
    self.mapRect = MKMapRectMake(lowerLeft.x, upperRight.y, upperRight.x - lowerLeft.x, lowerLeft.y - upperRight.y)
    return self
  end

  def coordinate
    return MKCoordinateForMapPoint(MKMapPointMake(MKMapRectGetMidX(self.mapRect), MKMapRectGetMidY(self.mapRect)))
  end

  def boundingMapRect
    return self.mapRect
  end
end
And here is the custom MKOverlayRenderer:
class ImageOverlayRenderer < MKOverlayRenderer
  def drawMapRect mapRect, zoomScale:zoomScale, inContext:context
    puts "drawMapRect"
    theMapRect = self.overlay.boundingMapRect
    theRect = self.rectForMapRect(theMapRect)
    CGContextScaleCTM(context, 1.0, -1.0)
    CGContextTranslateCTM(context, 0.0, -theRect.size.height)
    CGContextDrawImage(context, theRect, self.overlay.imageData)
  end
end
And in my view controller I am overriding the mapView:rendererForOverlay method:
def mapView mapView, rendererForOverlay:overlay
  if overlay.isKindOfClass(ImageOverlay)
    renderer = ImageOverlayRenderer.alloc.initWithOverlay overlay
    return renderer
  end
  return nil
end
The problem is that drawMapRect is never called, and the app crashes with no error other than a message that a crash report may have been generated, which contains this:
Exception Type: EXC_BAD_ACCESS (SIGBUS)
Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000000000000
Everything else seems to work up until that point: mapView:rendererForOverlay is invoked and I am returning the renderer. I even overrode canDrawMapRect and it was being invoked.
Any ideas on how to get this working?

The problem was related to a bug in RubyMotion which has been fixed and will be made available in the next release:
http://hipbyte.myjetbrains.com/youtrack/issue/RM-533
Update: The fix has been released in RubyMotion 2.31 and I have verified it resolves my issue.

Related

Why can't glium `Headless` render an image like a normal window context?

I am working on an off-screen rendering program and I use the glium crate to do this. I followed the screenshot.rs example and it worked well.
Then I made some change:
The original code was:
fn main() {
    // building the display, ie. the main object
    let event_loop = glutin::EventsLoop::new();
    let wb = glutin::WindowBuilder::new().with_visible(true);
    let cb = glutin::ContextBuilder::new();
    let display = glium::Display::new(wb, cb, &event_loop).unwrap();
    // building the vertex buffer, which contains all the vertices that we will draw
I grouped this code into a function:
fn main() {
    // building the display, ie. the main object
    let event_loop = glutin::EventsLoop::new();
    let display = build_display((128,128), &event_loop);
    // building the vertex buffer, which contains all the vertices that we will draw

pub fn build_display(size: (u32, u32), event_loop: &glutin::EventsLoop) -> glium::Display {
    let version = parse_version(); // this will return `OpenGL 3.3`
    let wb = glutin::WindowBuilder::new()
        .with_visibility(false)
        .with_dimensions(glutin::dpi::LogicalSize::from(size));
    let cb = glutin::ContextBuilder::new()
        .with_gl(version);
    glium::Display::new(wb, cb, &event_loop).unwrap()
}
After this modification, the program still worked well. So I went on to add the headless context:
fn main() {
    // building the display, ie. the main object
    let event_loop = glutin::EventsLoop::new();
    let display = build_display_headless((128,128), &event_loop);
    // building the vertex buffer, which contains all the vertices that we will draw

pub fn build_display_headless(size: (u32, u32), event_loop: &glutin::EventsLoop) -> glium::HeadlessRenderer {
    let version = parse_version(); // this will return `OpenGL 3.3`
    let ctx = glutin::ContextBuilder::new()
        .with_gl(version)
        .build_headless(&event_loop, glutin::dpi::PhysicalSize::from(size))
        .expect("1");
    //let ctx = unsafe { ctx.make_current().expect("3") };
    glium::HeadlessRenderer::new(ctx).expect("4")
}
But this time the program did not work. There was no panic while running, but the output image was completely black, and its size was not 128x128 but 800x600.
I tried removing libEGL.dll so that, according to the glutin crate's documentation, .build_headless would build a window and hide it, just as my build_display function does. However, this failed too. What could cause this?

Swift 3 load .dae file into SCNNode

Loading a .dae file as a scene element
This code works, loading the file as the scene:
let scene = SCNScene(named: "art.scnassets/base-wall-tile_sample.dae")!
This code, loading the file as SCNGeometry, doesn't:
let url = Bundle.main.url(forResource: "art.scnassets/base-wall-tile_sample", withExtension: "dae")
let source = SCNSceneSource(url: url! )
let geo = source!.entryWithIdentifier("Geo", withClass: SCNGeometry.self)!
url and source are OK, but it crashes trying to produce geo: bad instruction.
This code, like several examples offered on the web, was in Swift 2 (load a collada (dae) file into SCNNode (Swift - SceneKit)). I had to juggle it into Swift 3, and something seems to have been lost in translation. Can someone tell me how to do this right?
A .dae file is always loaded as an SCNScene. You need to name the node containing the geometry you want to add.
Then you can load the scene, search it for the node with the given name, and add that node to your scene.
func addNode(named nodeName: String, fromSceneNamed sceneName: String, to scene: SCNScene) {
    if let loadedScene = SCNScene(named: sceneName),
        let node = loadedScene.rootNode.childNode(withName: nodeName, recursively: true) {
        scene.rootNode.addChildNode(node)
    }
}
guard let shipScene = SCNScene(named: "ship.dae") else { return }
let shipNode = SCNNode()
let shipSceneChildNodes = shipScene.rootNode.childNodes
for childNode in shipSceneChildNodes {
    shipNode.addChildNode(childNode)
}
// `node` here is whatever node in your own scene the loaded content should be attached to
node.addChildNode(shipNode)

AVAudioUnit (OS X) render block only called for certain sample rates

I'm having trouble getting AVAudioEngine (OS X) to play nice with all sample rates.
Here's my code for building the connections:
- (void)makeAudioConnections {
  auto hardwareFormat = [self.audioEngine.outputNode outputFormatForBus:0];
  auto format = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:hardwareFormat.sampleRate channels:2];
  NSLog(@"format: %@", format);
  @try {
    [self.audioEngine connect:self.avNode to:self.audioEngine.mainMixerNode format:format];
    [self.audioEngine connect:self.audioEngine.inputNode to:self.avNode format:format];
  } @catch(NSException* e) {
    NSLog(@"exception: %@", e);
  }
}
On my audio interface, the render callback is called for 44.1, 48, and 176.4 kHz. It is not called for 96 and 192 kHz. On the built-in audio, the callback is called for 44.1, 48, and 88 kHz, but not 96 kHz.
My AU's allocateRenderResourcesAndReturnError is being called for 96 kHz. No errors are returned.
- (BOOL)allocateRenderResourcesAndReturnError:(NSError * _Nullable *)outError {
  if (![super allocateRenderResourcesAndReturnError:outError]) {
    return NO;
  }
  _inputBus.allocateRenderResources(self.maximumFramesToRender);
  _sampleRate = _inputBus.bus.format.sampleRate;
  return YES;
}
Here's my AU's init method, which is mostly just cut & paste from Apple's AUv3 demo:
- (instancetype)initWithComponentDescription:(AudioComponentDescription)componentDescription options:(AudioComponentInstantiationOptions)options error:(NSError **)outError {
  self = [super initWithComponentDescription:componentDescription options:options error:outError];
  if (self == nil) {
    return nil;
  }
  // Initialize a default format for the busses.
  AVAudioFormat *defaultFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100. channels:2];
  // Create the input and output busses.
  _inputBus.init(defaultFormat, 8);
  _outputBus = [[AUAudioUnitBus alloc] initWithFormat:defaultFormat error:nil];
  // Create the input and output bus arrays.
  _inputBusArray = [[AUAudioUnitBusArray alloc] initWithAudioUnit:self busType:AUAudioUnitBusTypeInput busses:@[_inputBus.bus]];
  _outputBusArray = [[AUAudioUnitBusArray alloc] initWithAudioUnit:self busType:AUAudioUnitBusTypeOutput busses:@[_outputBus]];
  self.maximumFramesToRender = 256;
  return self;
}
To keep things simple, I'm setting the sample rate before starting the app.
I'm not sure where to begin tracking this down.
Update
Here's a small project which reproduces the issue I'm having:
Xcode project to reproduce issue
You'll get errors pulling from the input at certain sample rates.
On my built-in audio running at 96 kHz, the render block is called with alternating frame counts of 511 and 513, and errors -10863 (kAudioUnitErr_CannotDoInCurrentContext) and -10874 (kAudioUnitErr_TooManyFramesToProcess) respectively. Increasing maximumFramesToRender doesn't seem to help.
Update 2
I simplified my test down to just connecting the input to the main mixer:
[self.audioEngine connect:self.audioEngine.inputNode to:self.audioEngine.mainMixerNode format:nil];
I tried explicitly setting the format argument.
This still will not play through at 96 kHz, so I'm thinking this may be a bug in AVAudioEngine.
For play-through with AVAudioEngine, the input and output hardware formats and all the connection formats must be at the same sample rate. So the following should work.
AVAudioFormat *outputHWFormat = [self.audioEngine.outputNode outputFormatForBus:0];
AVAudioFormat *inputHWFormat = [self.audioEngine.inputNode inputFormatForBus:0];
if (inputHWFormat.sampleRate == outputHWFormat.sampleRate) {
  [self.audioEngine connect:self.audioEngine.inputNode to:self.audioEngine.mainMixerNode format:inputHWFormat];
  [self.audioEngine connect:self.audioEngine.mainMixerNode to:self.audioEngine.outputNode format:inputHWFormat];
}

Tkinter button doesn't respond (has no mouse over effect)

I'm writing a game that has info communicated from client to server and from server to client. One specific (non-playing) client is the monitor, which only displays the game board and players. This works fine; the only thing that doesn't work is the quit button on the GUI. A minor thing, but I would like it to work. :) Plus I think there might be something pretty wrong with the code, even though it works.
I tried all kinds of different commands (sys.exit, quit...) and nothing fixed it.
There's no error message; nothing happens with the button at all. No mouse-over effect, nothing if I click it. Relevant code (I removed the matrix and server logic because I think it's irrelevant - if it isn't, I'll post it):
class Main():
    def __init__(self, master):
        self.frame = Frame(master)
        self.frame.pack()

    # Has to be counted up by server class
    rounds = 0
    # Has to be communicated by server class. If numberwin == numberrobots,
    # game is won
    numberwin = 0
    numberrobots = 2

    def draw(self):
        if hasattr(self, 'info'):
            self.info.destroy()
        if hasattr(self, 'quit'):
            self.quit.destroy()
        print "Main should draw this matrix %s" % self.matrix
        [...] lots of matrix stuff [...]
        # Pop-Up if game was won
        # TODO: Make GUI quittable
        if self.numberwin == self.numberrobots:
            self.top = Toplevel()
            self.msg = Message(self.top, text="This game was won!")
            self.msg.pack(side=LEFT)
            self.quittop = Button(
                self.top, text="Yay", command=self.frame.destroy)
            self.quittop.pack(side=BOTTOM)
        # TODO: Quit GUI
        self.quit = Button(self.frame, text="Quit", command=self.frame.destroy)
        self.quit.pack(side=BOTTOM)
        # Information on the game
        self.info = Label(
            self.frame, text="Rounds played: {}, Numbers of robots in win condition: {}".format(self.rounds, self.numberwin))
        self.info.pack(side=TOP)

    def canvasCreator(self, numberrows, numbercolumns):
        # Game board
        self.canvas = Canvas(
            self.frame, width=numbercolumns * 100 + 10, height=numberrows * 100 + 10)
        self.canvas.pack()


class Agent(Protocol, basic.LineReceiver):
    master = Tk()
    main = Main(master)
    # So first matrix is treated differently from later matrixes
    flagFirstMatrix = 1

    def connectionMade(self):
        msg = dumps({"type": "monitor"})
        self.sendLine(msg)
        print "Sent message:", msg

    def dataReceived(self, data):
        # Decode the json dump
        print "Data received: %s" % data
        data = loads(data)
        self.main.matrix = np.matrix(data["positions"])
        self.main.goals = np.matrix(data["goals"])
        self.main.next_move_by_agent = data["next_move"]
        self.main.rounds = data["rounds"]
        self.main.numberwin = data["win_states"]
        if self.flagFirstMatrix == 1:
            self.main.numberrows, self.main.numbercolumns = self.main.matrix.shape
            self.main.canvasCreator(
                self.main.numberrows, self.main.numbercolumns)
            self.main.canvas.pack()
            self.flagFirstMatrix = 0
        self.main.canvas.delete(ALL)
        self.main.draw()
        self.master.update_idletasks()
        self.master.update()
First, there is no indentation for class Agent; second, the quit button's "callback" self.frame.destroy is never defined, so it doesn't do anything. If you meant Tkinter's destroy method, try self.frame.destroy() or try explicitly defining it. You can also try calling either frame.pack_forget() or frame.grid_forget().
Add master.mainloop() as the last line of your code.
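A minimal sketch of what both suggestions amount to (hypothetical names, not the asker's actual program): the button's command must reference a real callable, and mainloop() must run so Tkinter can process events at all.
from Tkinter import Tk, Frame, Button  # Python 2, as in the question

def main():
    master = Tk()
    frame = Frame(master)
    frame.pack()

    # command must be a callable; Tkinter invokes it when the button is clicked
    quit_button = Button(frame, text="Quit", command=master.destroy)
    quit_button.pack()

    # Without mainloop() (or regular update() calls) Tkinter never processes
    # events, so buttons show no mouse-over effect and ignore clicks.
    master.mainloop()

if __name__ == "__main__":
    main()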

How do you capture current frame from a MediaElement in WinRT (8.1)?

I am trying to implement screenshot functionality in a WinRT app that shows video via a MediaElement. I have the following code; it saves a screenshot that's the size of the MediaElement, but the image is empty (completely black). I tried with various types of media files. If I do a Win Key + Vol Down on a Surface RT, the screenshot includes the media frame content, but if I use the following code, it's blackness all around :(
private async Task SaveCurrentFrame()
{
    RenderTargetBitmap renderTargetBitmap = new RenderTargetBitmap();
    await renderTargetBitmap.RenderAsync(Player);
    var pixelBuffer = await renderTargetBitmap.GetPixelsAsync();
    MultimediaItem currentItem = (MultimediaItem)this.DefaultViewModel["Group"];
    StorageFolder currentFolder = Windows.Storage.ApplicationData.Current.LocalFolder;
    var saveFile = await currentFolder.CreateFileAsync(currentItem.UniqueId + ".png", CreationCollisionOption.ReplaceExisting);
    if (saveFile == null)
        return;
    // Encode the image to the selected file on disk
    using (var fileStream = await saveFile.OpenAsync(FileAccessMode.ReadWrite))
    {
        var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, fileStream);
        encoder.SetPixelData(
            BitmapPixelFormat.Bgra8,
            BitmapAlphaMode.Ignore,
            (uint)renderTargetBitmap.PixelWidth,
            (uint)renderTargetBitmap.PixelHeight,
            DisplayInformation.GetForCurrentView().LogicalDpi,
            DisplayInformation.GetForCurrentView().LogicalDpi,
            pixelBuffer.ToArray());
        await encoder.FlushAsync();
    }
}
Here MultimediaItem is my view model class that, among other things, has a UniqueId property that's a string.
'Player' is the name of the MediaElement.
Is there anything wrong with the code, or is this approach wrong and I have to get into the trenches with C++?
P.S. I am interested in the WinRT API only.
Update 1: It looks like RenderTargetBitmap doesn't support this; the MSDN documentation clarifies it: http://msdn.microsoft.com/en-us/library/windows/apps/windows.ui.xaml.media.imaging.rendertargetbitmap
I'd appreciate any pointers on how to do it using DirectX C++. This is a major task for me, so I'll crack this one way or the other and report back with the solution.
Yes, it is possible; it's a little bit tricky, but it works well.
You don't use the MediaElement, but the StorageFile itself.
You need to create a WriteableBitmap with the help of the Windows.Media.Editing namespace.
This works in UWP (Windows 10).
Here is a complete example with file picking, getting the video resolution, and saving the image to the Pictures Library:
TimeSpan timeOfFrame = new TimeSpan(0, 0, 1); // one sec

//pick mp4 file
var picker = new Windows.Storage.Pickers.FileOpenPicker();
picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.VideosLibrary;
picker.FileTypeFilter.Add(".mp4");
StorageFile pickedFile = await picker.PickSingleFileAsync();
if (pickedFile == null)
{
    return;
}

//Get video resolution
List<string> encodingPropertiesToRetrieve = new List<string>();
encodingPropertiesToRetrieve.Add("System.Video.FrameHeight");
encodingPropertiesToRetrieve.Add("System.Video.FrameWidth");
IDictionary<string, object> encodingProperties = await pickedFile.Properties.RetrievePropertiesAsync(encodingPropertiesToRetrieve);
uint frameHeight = (uint)encodingProperties["System.Video.FrameHeight"];
uint frameWidth = (uint)encodingProperties["System.Video.FrameWidth"];

//Use Windows.Media.Editing to get ImageStream
var clip = await MediaClip.CreateFromFileAsync(pickedFile);
var composition = new MediaComposition();
composition.Clips.Add(clip);
var imageStream = await composition.GetThumbnailAsync(timeOfFrame, (int)frameWidth, (int)frameHeight, VideoFramePrecision.NearestFrame);

//generate bitmap
var writableBitmap = new WriteableBitmap((int)frameWidth, (int)frameHeight);
writableBitmap.SetSource(imageStream);

//generate some random name for file in PicturesLibrary
var saveAsTarget = await KnownFolders.PicturesLibrary.CreateFileAsync("IMG" + Guid.NewGuid().ToString().Substring(0, 4) + ".jpg");

//get stream from bitmap
Stream stream = writableBitmap.PixelBuffer.AsStream();
byte[] pixels = new byte[(uint)stream.Length];
await stream.ReadAsync(pixels, 0, pixels.Length);

using (var writeStream = await saveAsTarget.OpenAsync(FileAccessMode.ReadWrite))
{
    var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, writeStream);
    encoder.SetPixelData(
        BitmapPixelFormat.Bgra8,
        BitmapAlphaMode.Premultiplied,
        (uint)writableBitmap.PixelWidth,
        (uint)writableBitmap.PixelHeight,
        96,
        96,
        pixels);
    await encoder.FlushAsync();
    using (var outputStream = writeStream.GetOutputStreamAt(0))
    {
        await outputStream.FlushAsync();
    }
}
Yeah... I spent a lot of hours on this.
OK, I have managed to get taking a snapshot from a MediaElement on a button press to work.
I am passing a MediaStreamSource object to the MediaElement using the SetMediaStreamSource method. MediaStreamSource has a SampleRequested event which is fired basically every time a new frame is drawn. Then, using a boolean, I control when to create the bitmap:
private async void MediaStream_SampleRequested(MediaStreamSource sender, MediaStreamSourceSampleRequestedEventArgs args)
{
    if (!takeSnapshot)
    {
        return;
    }
    takeSnapshot = false;
    Task.Run(() => DecodeAndSaveVideoFrame(args.Request.Sample));
}
After that, what is left is to decode the compressed image and convert it to a WriteableBitmap. The image is (or at least was in my case) in YUV format. You can get the byte array using
byte[] yvuArray = sample.Buffer.ToArray();
and then get the data from this array and convert it to RGB. Unfortunately I cannot post the entire code, but I'm going to give you a few more hints (see the sketch after this list):
YUV to RGB wiki: here you have a wiki describing how YUV to RGB conversion works.
Here I found a Python project whose solution I have adapted (and it works perfectly). To be more precise, you have to analyze how the NV12Converter method works.
The last thing is to change the takeSnapshot boolean to true after pressing a button or doing some other activity :).
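For illustration only, here is a rough Python sketch of the kind of conversion those hints point to; it is not the answer's actual NV12Converter, and the buffer layout and function names are assumptions. An NV12 frame stores a full-resolution Y plane followed by an interleaved, half-resolution UV plane, and each pixel is converted with the integer approximation given in the YUV-to-RGB wiki.
def clamp(v):
    # keep each channel in the valid 0..255 range
    return 0 if v < 0 else 255 if v > 255 else v

def nv12_to_rgb(data, width, height):
    # data is assumed to be a bytearray holding one NV12 frame
    y_plane = data[:width * height]
    uv_plane = data[width * height:]  # interleaved U/V pairs at half resolution
    rgb = bytearray(width * height * 3)
    for row in range(height):
        for col in range(width):
            y = y_plane[row * width + col]
            uv = (row // 2) * width + (col // 2) * 2
            u, v = uv_plane[uv], uv_plane[uv + 1]
            c, d, e = y - 16, u - 128, v - 128
            i = (row * width + col) * 3
            rgb[i] = clamp((298 * c + 409 * e + 128) >> 8)                # R
            rgb[i + 1] = clamp((298 * c - 100 * d - 208 * e + 128) >> 8)  # G
            rgb[i + 2] = clamp((298 * c + 516 * d + 128) >> 8)            # B
    return rgb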