For some reason, when gifArrayAdjusted: [CGImage] contains 17 or more CGImages, the GIF is created correctly in full color.
But when gifArrayAdjusted: [CGImage] contains 16 or fewer CGImages, the GIF is created but is missing all blue elements. Red, green, and grey scales are present in the GIF, but it's as if blue is 'invisible'.
Tested: when inspecting gifArrayAdjusted in either case, the images have full color. I therefore assume the issue is in the createGIF func.
Note: this seems related to timing or CPU load in my for-in loop somehow. If I just use gifArray and skip emphasizing certain frames, full color is ALWAYS present.
Can anyone explain why this is happening?
{
    var gifArray = [CGImage]()

    // Fill [CGImage] with CoverPage
    let rect = CGRect(x: 0, y: 0, width: self.view.frame.size.width, height: self.view.frame.size.height)
    UIGraphicsBeginImageContextWithOptions(RVC.MainView.frame.size, false, 1.0)
    #imageLiteral(resourceName: "CoverPage").draw(in: rect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    gifArray.append(newImage!.cgImage!)

    // Fill [CGImage] with the rest of the images by snapshotting MainView
    repeat {
        UIGraphicsBeginImageContext(RVC.MainView.frame.size)
        RVC.MainView.layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        gifArray.append(image.cgImage!)
        RVC.ReplayForward()
    } while RVC.loopSwitch == 0

    // Certain images are entered twice, for longer display in the GIF,
    // according to RVC.specialGifFrames: [Int]
    var gifArrayAdjusted = [CGImage]()
    for frame in 0..<RVC.specialGifFrames.count {
        if RVC.specialGifFrames[frame] == 1 {
            gifArrayAdjusted.append(gifArray[frame])
        } else if RVC.specialGifFrames[frame] == 2 {
            gifArrayAdjusted.append(gifArray[frame])
            gifArrayAdjusted.append(gifArray[frame])
        }
    }

    // Create GIF data
    let gifData = CFDataCreateMutable(kCFAllocatorDefault, 0)
    let gif = createGIF(with: gifArrayAdjusted, data: gifData!, loopCount: 0, frameDelay: 1.1)

    // Save data to the documents directory
    let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0] as NSString
    let pathed = documentsPath.appendingPathComponent(path)
    do {
        try gif.write(to: URL(fileURLWithPath: pathed), options: .atomic)
    } catch {
        // Writing failed; ignored for now
    }
}
func createGIF(with images: [CGImage], data: CFMutableData, loopCount: Int, frameDelay: Double) -> Data {
    // kUTTypeGIF requires `import MobileCoreServices` (plus ImageIO for the destination calls)
    let gifDest = CGImageDestinationCreateWithData(data, kUTTypeGIF, images.count, nil)!
    // File-level properties: how many times the GIF loops (0 = forever)
    let fileProperties = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFLoopCount as String: loopCount]]
    CGImageDestinationSetProperties(gifDest, fileProperties as CFDictionary)
    // Frame-level properties: delay between frames, in seconds
    let frameProperties = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFDelayTime as String: frameDelay]]
    for img in images {
        CGImageDestinationAddImage(gifDest, img, frameProperties as CFDictionary)
    }
    CGImageDestinationFinalize(gifDest)
    return data as Data
}
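Side note while debugging: since the claim is that the frames in gifArrayAdjusted still have full color, it may help to sample a pixel right before encoding, to see whether blue is lost before or inside createGIF. A minimal sketch (the coordinates, and the expectation of a blue pixel there, are assumptions about your artwork):

```swift
import CoreGraphics

// Draws one CGImage so that pixel (x, y) (top-left origin) lands on a 1x1
// RGBA8 context, then returns the four channel values at that point.
func pixelRGBA(in image: CGImage, x: Int, y: Int) -> [UInt8]? {
    var pixel = [UInt8](repeating: 0, count: 4)
    let drawn = pixel.withUnsafeMutableBytes { (buffer: UnsafeMutableRawBufferPointer) -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: 1, height: 1,
                                      bitsPerComponent: 8, bytesPerRow: 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        // Offset the draw so the requested pixel falls at the context origin
        // (Core Graphics uses a bottom-left origin).
        context.draw(image, in: CGRect(x: -x, y: -(image.height - 1 - y),
                                       width: image.width, height: image.height))
        return true
    }
    return drawn ? pixel : nil
}
```

Calling this on a frame that should contain blue, e.g. `pixelRGBA(in: gifArrayAdjusted[0], x: 10, y: 10)`, tells you whether the blue channel survives up to the encoder.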
The following extension file generates my image upside down. All I need to do is flip my image by 180 degrees.
case .landscapeLeft:
    var transform: CGAffineTransform = CGAffineTransform.identity
    transform = transform.translatedBy(x: self.size.width, y: self.size.height)
    transform = transform.rotated(by: CGFloat(Double.pi / 2))
    guard let cgImage = self.cgImage,
          let colorSpace = cgImage.colorSpace,
          let context = CGContext(data: nil,
                                  width: Int(self.size.width), height: Int(self.size.height),
                                  bitsPerComponent: cgImage.bitsPerComponent, bytesPerRow: 0,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return self }
    context.concatenate(transform)
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
    guard let transformed = context.makeImage() else { return self }
    return UIImage(cgImage: transformed)
I have tried 3 * Double.pi / 2, but that makes no image appear in the image view. The only formula that gets an image into the image view is Double.pi / 2.
Your code rotates the image about the origin of the image, which happens to be the top-left corner. You need to translate the origin to the centre of the image before applying the rotation, then translate the origin back to the top-left corner afterwards. If you replace your transform construction with the following, it should work:
var transform: CGAffineTransform = CGAffineTransform.identity
transform = transform.translatedBy(x: self.size.width / 2, y: self.size.height / 2)   // move origin to centre
transform = transform.rotated(by: angle)                                              // rotate about centre
transform = transform.translatedBy(x: -self.size.width / 2, y: -self.size.height / 2) // move origin back
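Dropped into the question's extension, the whole method would look roughly like this; a sketch, assuming `angle` is in radians (use `CGFloat(Double.pi)` for the 180-degree flip, which keeps the canvas size unchanged):

```swift
extension UIImage {
    // Returns a copy rotated about the image centre; falls back to self on failure.
    func rotated(by angle: CGFloat) -> UIImage {
        guard let cgImage = self.cgImage,
              let colorSpace = cgImage.colorSpace,
              let context = CGContext(data: nil,
                                      width: Int(size.width), height: Int(size.height),
                                      bitsPerComponent: cgImage.bitsPerComponent,
                                      bytesPerRow: 0,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return self }
        var transform = CGAffineTransform.identity
        transform = transform.translatedBy(x: size.width / 2, y: size.height / 2)   // origin to centre
        transform = transform.rotated(by: angle)                                    // rotate about centre
        transform = transform.translatedBy(x: -size.width / 2, y: -size.height / 2) // origin back
        context.concatenate(transform)
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        guard let rotated = context.makeImage() else { return self }
        return UIImage(cgImage: rotated)
    }
}
```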
I have a UIImageView that you can tap on, and it draws a circle. I store the locations of the circles in an array of dictionaries, which allows me to "replay" the drawing of the circles. However, when the UIImageView is a different size from the original, the circles don't scale to the new UIImageView.
How can I get the circles to scale? For demonstration purposes, the top picture is the size of the UIImageView used for input and the second one is the size used for replay.
Inputting the circles:
Replaying the circles (the circles should be in the blue UIImageView):
import Foundation
import UIKit

class DrawPuck {
    func drawPuck(circle: CGPoint, circleColour: CGColor, circleSize: CGFloat, imageView: UIImageView) {
        let circleBezierPath = UIBezierPath(arcCenter: circle,
                                            radius: circleSize,
                                            startAngle: 0,
                                            endAngle: CGFloat(Double.pi * 2),
                                            clockwise: true)
        let shapeLayer = CAShapeLayer()
        shapeLayer.path = circleBezierPath.cgPath
        // Change the fill colour
        shapeLayer.fillColor = circleColour
        // You can change the stroke colour
        shapeLayer.strokeColor = UIColor.white.cgColor
        // You can change the line width
        shapeLayer.lineWidth = 0.5
        imageView.layer.addSublayer(shapeLayer)
    }
}
I was able to resolve this with CATransform3DMakeScale. As long as I keep the aspect ratio of the original image, it works great.
let width = yellowImageView.frame.width / blueImageView.frame.width
let height = yellowImageView.frame.height / blueImageView.frame.height
// Scale x by the width ratio and y by the height ratio
shapeLayer.transform = CATransform3DMakeScale(width, height, 1.0)
view.layer.addSublayer(shapeLayer)
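If the replay view does not preserve the aspect ratio, scaling the whole layer by one pair of factors distorts the circles; an alternative is to scale each stored centre point instead. A sketch, assuming the circles were recorded in yellowImageView's coordinate space and `savedCircles: [CGPoint]` is the stored history (both names hypothetical):

```swift
let scaleX = blueImageView.frame.width / yellowImageView.frame.width
let scaleY = blueImageView.frame.height / yellowImageView.frame.height
for centre in savedCircles {
    // Map the recorded point into the replay view's coordinate space
    let scaled = CGPoint(x: centre.x * scaleX, y: centre.y * scaleY)
    DrawPuck().drawPuck(circle: scaled,
                        circleColour: UIColor.red.cgColor,
                        circleSize: 10 * min(scaleX, scaleY), // keep the pucks round
                        imageView: blueImageView)
}
```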
I am facing a weird problem with MKMapView. I have used an MKOverlayRenderer. The problem is that when I zoom out, the image shows correctly, but when I zoom in, part of the image is cut off. It looks as if a portion of the map view is drawn above the overlay. The following is my overlay renderer code.
class MapOverlayRenderer: MKOverlayRenderer {
    var overlayImage: UIImage
    var plan: Plan

    init(overlay: MKOverlay, overlayImage: UIImage, plan: Plan) {
        self.overlayImage = overlayImage
        self.plan = plan
        super.init(overlay: overlay)
    }

    override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in ctx: CGContext) {
        let theMapRect = overlay.boundingMapRect
        let theRect = rect(for: theMapRect)
        // Rotate around the top-left corner
        ctx.rotate(by: CGFloat(degreesToRadians(plan.bearing)))
        // Draw the image
        UIGraphicsPushContext(ctx)
        overlayImage.draw(in: theRect, blendMode: .normal, alpha: 1.0)
        UIGraphicsPopContext()
    }

    func degreesToRadians(_ x: Double) -> Double {
        return Double.pi * x / 180.0
    }
}
Though I don't know the actual reason, when I comment out the ctx.rotate(by:) call the problem goes away. But that's not a solution for me, because the image has to stay in position.
Please try the following.
override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in ctx: CGContext) {
    DispatchQueue.main.async {
        // Explicit `self` is required inside the closure
        let theMapRect = self.overlay.boundingMapRect
        let theRect = self.rect(for: theMapRect)
        // Rotate around the top-left corner
        ctx.rotate(by: CGFloat(self.degreesToRadians(self.plan.bearing)))
        // Draw the image
        UIGraphicsPushContext(ctx)
        self.overlayImage.draw(in: theRect, blendMode: .normal, alpha: 1.0)
        UIGraphicsPopContext()
    }
}
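If the clipping really comes from rotating about the context origin (the top-left of the rendered rect), another option keeps the drawing synchronous and borrows the translate-rotate-translate trick from the rotation answer earlier: rotate about the overlay's centre instead. A sketch, untested against this exact setup:

```swift
override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in ctx: CGContext) {
    let theRect = rect(for: overlay.boundingMapRect)
    ctx.saveGState()
    // Rotate about the centre of the overlay's bounding rect rather than
    // the context origin, so the image stays inside its rect.
    ctx.translateBy(x: theRect.midX, y: theRect.midY)
    ctx.rotate(by: CGFloat(degreesToRadians(plan.bearing)))
    ctx.translateBy(x: -theRect.midX, y: -theRect.midY)
    UIGraphicsPushContext(ctx)
    overlayImage.draw(in: theRect, blendMode: .normal, alpha: 1.0)
    UIGraphicsPopContext()
    ctx.restoreGState()
}
```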
I have a camera app which allows the user to both take pictures and record video. The iPhone is attached to a medical otoscope using an adapter, so the video that is captured is very small (about the size of a dime). I need to be able to zoom the video to fill the screen, but have not been able to figure out how to do so.
I found this answer here on SO that uses Obj-C, but I have not had success translating it to Swift. I am very close but am getting stuck. Here is my code for handling a UIPinchGestureRecognizer:
@IBAction func handlePinchGesture(sender: UIPinchGestureRecognizer) {
    var initialVideoZoomFactor: CGFloat = 0.0
    if sender.state == UIGestureRecognizerState.began {
        initialVideoZoomFactor = (captureDevice?.videoZoomFactor)!
    } else {
        let scale: CGFloat = min(max(1, initialVideoZoomFactor * sender.scale), 4)
        CATransaction.begin()
        CATransaction.setAnimationDuration(0.01)
        previewLayer?.transform = CGAffineTransform(scaleX: scale, y: scale) // error on this line
        CATransaction.commit()
        // lockForConfiguration() throws, hence the try? here
        if (try? captureDevice?.lockForConfiguration()) != nil {
            captureDevice?.videoZoomFactor = scale
            captureDevice?.unlockForConfiguration()
        }
    }
}
This line...
previewLayer?.transform = CGAffineTransform(scaleX: scale, y: scale)
... gives me the error 'Cannot assign value of type CGAffineTransform to type CATransform3D'. I'm trying to figure this out, but my attempts to fix it have been unfruitful.
Figured it out. I changed the problematic line to:
previewLayer?.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale))
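One more snag worth flagging: in the posted handler, initialVideoZoomFactor is a local variable, so it resets to 0.0 on every gesture callback and the value saved in .began is lost. Promoting it to a property makes the pinch behave; a sketch, assuming `captureDevice` and `previewLayer` are the question's existing properties:

```swift
private var initialVideoZoomFactor: CGFloat = 1.0 // instance property, persists across callbacks

@IBAction func handlePinchGesture(sender: UIPinchGestureRecognizer) {
    guard let device = captureDevice else { return }
    if sender.state == .began {
        initialVideoZoomFactor = device.videoZoomFactor // remember where the pinch started
    } else {
        let scale = min(max(1, initialVideoZoomFactor * sender.scale), 4)
        CATransaction.begin()
        CATransaction.setAnimationDuration(0.01)
        previewLayer?.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale))
        CATransaction.commit()
        do {
            try device.lockForConfiguration()
            device.videoZoomFactor = scale
            device.unlockForConfiguration()
        } catch {
            // Could not lock the device; skip updating the zoom factor
        }
    }
}
```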
I am looking for a more efficient way to constrain/set text in Raphael.
I have text that can be written in a box. The text should be centered (based on a number that can change if the user wants to shift the text left/right), and the text cannot go beyond the boundaries of the paper.
This is what I do now, and it's not manageable performance-wise:
// Build a path
var path = this.paper.print(x, x, text, font, size, 'middle')
// Center line by getting bounding box and shifting to half of that
var bb = path.getBBox()
path.transform('...T' + [-bb.width / 2, 0])
// Compare paper size vs bb
// if it goes beyond I adjust X and Y accordingly and redo the above
So ideally I would like to predict the size of the text before it prints. I am not sure this is possible, though, as it is probably font-dependent. I have looked for a command to constrain text but do not see one.
The other thought I had was to create some kind of shadow paper that does not print to screen and use that to determine the size before I render to the user. I am not sure where the lag is, though: if it's in the screen rendering, that would help, but if it's in the general logic of creating the SVG, then it won't.
I'd appreciate suggestions.
You can use opentype.js for measuring text; you can also get the SVG path by using the Path.toPathData method. There is no need for Cufon-compiled JS fonts.
For text measurement, create a canvas element somewhere in your DOM.
<canvas id="canvas" style="display: none;"></canvas>
This function will load the font and compute the text's width and height:
function measureText(text, font_name, size) {
    var dfd = $.Deferred();
    opentype.load('/fonts/' + font_name + '.ttf', function (err, font) {
        if (err) {
            dfd.reject('Font is not loaded');
            console.log('Could not load font: ' + err);
        } else {
            // Optional: draw the path on the hidden canvas for inspection
            var canvas = document.getElementById('canvas');
            var ctx = canvas.getContext('2d');
            ctx.clearRect(0, 0, canvas.width, canvas.height);
            var tmp_path = font.getPath(text, 100, 100, size);
            tmp_path.draw(ctx);

            // Sum advance widths (plus kerning) and track glyph extremes
            var ascent = 0;
            var descent = 0;
            var width = 0;
            var scale = 1 / font.unitsPerEm * size;
            var glyphs = font.stringToGlyphs(text);
            for (var i = 0; i < glyphs.length; i++) {
                var glyph = glyphs[i];
                if (glyph.advanceWidth) {
                    width += glyph.advanceWidth * scale;
                }
                if (i < glyphs.length - 1) {
                    var kerningValue = font.getKerningValue(glyph, glyphs[i + 1]);
                    width += kerningValue * scale;
                }
                ascent = Math.max(ascent, glyph.yMax);
                descent = Math.min(descent, glyph.yMin);
            }
            dfd.resolve({
                width: width,
                height: ascent * scale,
                actualBoundingBoxDescent: descent * scale,
                fontBoundingBoxAscent: font.ascender * scale,
                fontBoundingBoxDescent: font.descender * scale
            });
        }
    });
    return dfd.promise();
}
Then use this function to get the text dimensions:
$.when(measureText("ABCD", "arial", 48)).done(function (res) {
    var height = res.height,
        width = res.width;
    // ...
});