Extension file making image flip upside down (Swift 3)

The following extension file generates my image upside down. All I need to do is flip my image by 180 degrees.
case .landscapeLeft:
    var transform: CGAffineTransform = CGAffineTransform.identity
    transform = transform.translatedBy(x: self.size.width, y: self.size.height)
    transform = transform.rotated(by: CGFloat(Double.pi / 2))

    guard let cgImage = self.cgImage, let colorSpace = cgImage.colorSpace,
          let context = CGContext(data: nil, width: Int(self.size.width), height: Int(self.size.height),
                                  bitsPerComponent: cgImage.bitsPerComponent, bytesPerRow: 0,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return self }

    context.concatenate(transform)
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))

    guard let transformed = context.makeImage() else { return self }
    return UIImage(cgImage: transformed)
return imageResult!
I have tried using 3 * Double.pi / 2, but that makes no image appear in the image view. The only value that gets an image to show up with my code is Double.pi / 2.

Your code rotates the image about the image's origin, which happens to be its top-left corner. You need to translate the origin to the centre of the image before applying the rotation, and then translate it back to the top-left corner afterwards. If you replace your transform construction with the following, it should work:
var transform: CGAffineTransform = CGAffineTransform.identity
transform = transform.translatedBy(x: self.size.width/2, y: self.size.height/2)
transform = transform.rotated(by: angle)
transform = transform.translatedBy(x: -self.size.width/2, y: -self.size.height/2)
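Putting that together with the question's drawing code, a minimal sketch of a centre-rotation helper could look like the following (the method name rotated(by:) and the extension wrapper are illustrative, not from the original post):
import UIKit

extension UIImage {
    /// Returns a copy of the image rotated by `angle` radians about its centre.
    func rotated(by angle: CGFloat) -> UIImage {
        guard let cgImage = self.cgImage, let colorSpace = cgImage.colorSpace,
              let context = CGContext(data: nil, width: Int(size.width), height: Int(size.height),
                                      bitsPerComponent: cgImage.bitsPerComponent, bytesPerRow: 0,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return self }

        // Move the origin to the centre, rotate, then move it back.
        var transform = CGAffineTransform.identity
        transform = transform.translatedBy(x: size.width / 2, y: size.height / 2)
        transform = transform.rotated(by: angle)
        transform = transform.translatedBy(x: -size.width / 2, y: -size.height / 2)

        context.concatenate(transform)
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: size.width, height: size.height))

        guard let transformed = context.makeImage() else { return self }
        return UIImage(cgImage: transformed)
    }
}
Flipping the image by 180 degrees would then be image.rotated(by: .pi).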

Related

Ball rolling and turning effect with SpriteKit

I have created a ball node and applied texture images captured from my 3D model. I captured six images in total: three (120 degrees apart) for rolling around the x axis, and three for rolling around the y axis. I want SpriteKit to simulate the rolling with the code below. When I apply an impulse, the ball starts sliding instead of rolling, and when it collides with the sides it starts turning, but again without rolling. Depending on the impulse, the ball should sometimes turn and roll at the same time. The behaviour of the balls in an 8-ball pool game is an example of the effect I want.
var ball = SKSpriteNode()
var textureAtlas = SKTextureAtlas()
var textureArray = [SKTexture]()

override func didMove(to view: SKView) {
    textureAtlas = SKTextureAtlas(named: "white")
    for i in 0..<textureAtlas.textureNames.count {
        let name = "ball_\(i).png"
        textureArray.append(SKTexture(imageNamed: name))
    }
    ball = SKSpriteNode(imageNamed: textureAtlas.textureNames[0])
    ball.size = CGSize(width: ballRadius * 2, height: ballRadius * 2)   // ballRadius is defined elsewhere
    ball.position = CGPoint(x: -ballRadius / 2 - 20, y: -ballRadius - 20)
    ball.zPosition = 0
    ball.physicsBody = SKPhysicsBody(circleOfRadius: ballRadius)
    ball.physicsBody?.isDynamic = true
    ball.physicsBody?.restitution = 0.3
    ball.physicsBody?.linearDamping = 0
    ball.physicsBody?.allowsRotation = true
    addChild(ball)
}
You need to apply an angular impulse to get it to rotate:
node.physicsBody!.applyAngularImpulse(1.0)
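For example, a minimal sketch of kicking the ball from touchesBegan, assuming ball is the node created in didMove(to:) above (the impulse values are illustrative):
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // A linear impulse makes the ball move; the angular impulse makes it spin as it moves.
    ball.physicsBody?.applyImpulse(CGVector(dx: 10, dy: 10))
    ball.physicsBody?.applyAngularImpulse(0.05)
}
To make the roll visible with the captured textures, the frames in textureArray could additionally be cycled with SKAction.animate(with:timePerFrame:), though that part is not covered by the answer above.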

CGImageDestinationAddImage writes image without blue elements

For some reason, when gifArrayAdjusted: [CGImage] has 17 or more CGImages, the GIF is created correctly in full color.
But when gifArrayAdjusted: [CGImage] has 16 or fewer CGImages, the GIF is created but is missing all blue elements. Red, green, and grey scales are present in the GIF, but it's as if blue were 'invisible'.
Tested: when viewing gifArrayAdjusted in either case, the images have full color, so I assume the issue is in the createGIF func.
Note: this seems to have something to do with timing or CPU resources, because of my for-in loop. If I just use gifArray and skip emphasizing certain frames, full color is ALWAYS present.
Can anyone explain why this is happening?
{
    var gifArray = [CGImage]()

    // Fill [CGImage] with CoverPage
    let rect = CGRect(x: 0, y: 0, width: self.view.frame.size.width, height: self.view.frame.size.height)
    UIGraphicsBeginImageContextWithOptions(RVC.MainView.frame.size, false, 1.0)
    #imageLiteral(resourceName: "CoverPage").draw(in: rect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    gifArray.append(newImage!.cgImage!)

    // Fill [CGImage] with the rest of the images
    repeat {
        UIGraphicsBeginImageContext(RVC.MainView.frame.size)
        RVC.MainView.layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        let cgImage = image.cgImage
        gifArray.append(cgImage!)
        RVC.ReplayForward()
    } while RVC.loopSwitch == 0

    // Certain images are entered twice for longer display in the GIF, according to RVC.specialGifFrames: [Int]
    var gifArrayAdjusted = [CGImage]()
    for frame in 0..<RVC.specialGifFrames.count {
        if RVC.specialGifFrames[frame] == 1 {
            gifArrayAdjusted.append(gifArray[frame])
        } else if RVC.specialGifFrames[frame] == 2 {
            gifArrayAdjusted.append(gifArray[frame])
            gifArrayAdjusted.append(gifArray[frame])
        }
    }

    // Create GIF data
    let CFData1 = CFDataCreateMutable(kCFAllocatorDefault, 0)
    let GifData = createGIF(with: gifArrayAdjusted, data: CFData1!, loopCount: 0, frameDelay: 1.1)

    // Save data
    let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0] as NSString
    let pathed = documentsPath.appendingPathComponent(path)
    do {
        try GifData.write(to: URL(fileURLWithPath: pathed), options: .atomic)
    } catch _ {
    }
}

func createGIF(with images: [CGImage], data: CFMutableData, loopCount: Int, frameDelay: Double) -> Data {
    let gifDest = CGImageDestinationCreateWithData(data, kUTTypeGIF, images.count, nil)
    let fileProperties = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFLoopCount as String: loopCount]]
    CGImageDestinationSetProperties(gifDest!, fileProperties as CFDictionary?)
    let frameProperties = [(kCGImagePropertyGIFDictionary as String): [(kCGImagePropertyGIFDelayTime as String): frameDelay]]
    for img in images {
        CGImageDestinationAddImage(gifDest!, img, frameProperties as CFDictionary?)
    }
    CGImageDestinationFinalize(gifDest!)
    return data as Data
}
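As an aside, the code above relies on Image I/O and UTI symbols; a minimal sketch of the imports it needs on iOS (Swift 3) would be:
import UIKit
import ImageIO
import MobileCoreServices   // provides kUTTypeGIF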

How to scale CAShapeLayer to different UIImageView

I have a UIImageView that you can tap on and it draws a circle. I store the location of the circles in an Array of Dictionaries. This allows me to "replay" the drawing of the circles. However, when the UIImageView is a different size from the original, the circles don't scale to the new UIImageView.
How can I get the circles to scale? For demonstration purposes, the top picture is the size of the UIImageView used for input and the second one is the size for replay.
Inputting the circles:
Replaying the circles (the circles should be in the blue UIImageView):
import Foundation
import UIKit

class DrawPuck {
    func drawPuck(circle: CGPoint, circleColour: CGColor, circleSize: CGFloat, imageView: UIImageView) {
        let circleBezierPath = UIBezierPath(arcCenter: CGPoint(x: circle.x, y: circle.y),
                                            radius: CGFloat(circleSize),
                                            startAngle: CGFloat(0),
                                            endAngle: CGFloat(M_PI * 2),
                                            clockwise: true)
        let shapeLayer = CAShapeLayer()
        shapeLayer.path = circleBezierPath.cgPath
        // Change the fill colour
        shapeLayer.fillColor = circleColour
        // You can change the stroke colour
        shapeLayer.strokeColor = UIColor.white.cgColor
        // You can change the line width
        shapeLayer.lineWidth = 0.5
        imageView.layer.addSublayer(shapeLayer)
    }
}
I was able to resolve this with CATransform3DMakeScale. As long as I keep the aspect ratio of the original image, it works great.
let width = yellowImageView.frame.width / blueImageView.frame.width
let height = yellowImageView.frame.height / blueImageView.frame.height
shapeLayer.transform = CATransform3DMakeScale(height, width, 1.0)
view.layer.addSublayer(shapeLayer)
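For context, here is a sketch of how a replay call might apply that scale to the layer drawPuck creates; the recordedPoint constant and the idea of returning the layer from drawPuck are assumptions for illustration, not part of the original code:
// Hypothetical variant of drawPuck that returns the layer so the caller can transform it.
func drawPuck(circle: CGPoint, circleColour: CGColor, circleSize: CGFloat, imageView: UIImageView) -> CAShapeLayer {
    let path = UIBezierPath(arcCenter: circle, radius: circleSize,
                            startAngle: 0, endAngle: CGFloat(M_PI * 2), clockwise: true)
    let shapeLayer = CAShapeLayer()
    shapeLayer.path = path.cgPath
    shapeLayer.fillColor = circleColour
    shapeLayer.strokeColor = UIColor.white.cgColor
    shapeLayer.lineWidth = 0.5
    imageView.layer.addSublayer(shapeLayer)
    return shapeLayer
}

// Replay a recorded circle into the blue view and scale its layer with the ratio above.
let width = yellowImageView.frame.width / blueImageView.frame.width
let height = yellowImageView.frame.height / blueImageView.frame.height
let layer = drawPuck(circle: recordedPoint, circleColour: UIColor.red.cgColor,
                     circleSize: 10, imageView: blueImageView)
layer.transform = CATransform3DMakeScale(width, height, 1.0)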

MapView overlay is cutting off after zoom in

I am facing a weird problem with MKMapView. I am using an MKOverlayRenderer. When I zoom out, the image shows correctly, but when I zoom in, parts of the image are cut off. It looks as though a portion of the MapView is drawn above the overlay. Following is my overlay renderer code:
class MapOverlayRenderer: MKOverlayRenderer {
    var overlayImage: UIImage
    var plan: Plan

    init(overlay: MKOverlay, overlayImage: UIImage, plan: Plan) {
        self.overlayImage = overlayImage
        self.plan = plan
        super.init(overlay: overlay)
    }

    override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in ctx: CGContext) {
        let theMapRect = overlay.boundingMapRect
        let theRect = rect(for: theMapRect)
        // Rotate around the top-left corner
        ctx.rotate(by: CGFloat(degreesToRadians(plan.bearing)))
        // Draw the image
        UIGraphicsPushContext(ctx)
        overlayImage.draw(in: theRect, blendMode: CGBlendMode.normal, alpha: 1.0)
        UIGraphicsPopContext()
    }

    func degreesToRadians(_ x: Double) -> Double {
        return M_PI * x / 180.0
    }
}
I don't know the actual reason, but when I comment out the ctx.rotate(by:) call, the problem goes away. That is not a solution for me, though, because the image has to stay rotated into position.
Please try the code below.
override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in ctx: CGContext) {
    DispatchQueue.main.async {
        let theMapRect = self.overlay.boundingMapRect
        let theRect = self.rect(for: theMapRect)
        // Rotate around the top-left corner
        ctx.rotate(by: CGFloat(self.degreesToRadians(self.plan.bearing)))
        // Draw the image
        UIGraphicsPushContext(ctx)
        self.overlayImage.draw(in: theRect, blendMode: CGBlendMode.normal, alpha: 1.0)
        UIGraphicsPopContext()
    }
}
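As an alternative sketch (not part of the answer above), the clipping could also be investigated by rotating around the centre of the overlay rect and saving/restoring the graphics state, so the rotation does not swing the drawing out of the tile being rendered; this is an assumption about the cause, not a confirmed fix:
override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in ctx: CGContext) {
    let theRect = rect(for: overlay.boundingMapRect)
    ctx.saveGState()
    // Rotate about the centre of the overlay rect instead of the context origin.
    ctx.translateBy(x: theRect.midX, y: theRect.midY)
    ctx.rotate(by: CGFloat(degreesToRadians(plan.bearing)))
    ctx.translateBy(x: -theRect.midX, y: -theRect.midY)
    UIGraphicsPushContext(ctx)
    overlayImage.draw(in: theRect, blendMode: .normal, alpha: 1.0)
    UIGraphicsPopContext()
    ctx.restoreGState()
}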

AVCapture and zooming of previewLayer in Swift

I have a camera app which allows the user to both take pictures and record video. The iPhone is attached to a medical otoscope using an adapter, so the video that is captured is very small (about the size of a dime). I need to be able to zoom the video to fill the screen, but have not been able to figure out how to do so.
I found this answer here on SO that uses Objective-C, but I have not had success translating it to Swift. I am very close but am getting stuck. Here is my code for handling a UIPinchGestureRecognizer:
@IBAction func handlePinchGesture(sender: UIPinchGestureRecognizer) {
    var initialVideoZoomFactor: CGFloat = 0.0
    if (sender.state == UIGestureRecognizerState.began) {
        initialVideoZoomFactor = (captureDevice?.videoZoomFactor)!
    } else {
        let scale: CGFloat = min(max(1, initialVideoZoomFactor * sender.scale), 4)
        CATransaction.begin()
        CATransaction.setAnimationDuration(0.01)
        previewLayer?.transform = CGAffineTransform(scaleX: scale, y: scale)
        CATransaction.commit()
        if ((captureDevice?.lockForConfiguration()) != nil) {
            captureDevice?.videoZoomFactor = scale
            captureDevice?.unlockForConfiguration()
        }
    }
}
This line...
previewLayer?.transform = CGAffineTransform(scaleX: scale, y: scale)
... gives me the error 'Cannot assign value of type CGAffineTransform to type CATransform3D'. I'm trying to figure this out, but my attempts to fix it have been unfruitful.
Figured it out: Changed the problematic line to:
previewLayer?.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale))
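For completeness, here is a sketch of the whole handler with that fix applied; moving initialVideoZoomFactor to a property (so the value captured in .began survives into later gesture states) and the do/try/catch around lockForConfiguration() are assumptions about the surrounding code, not part of the original answer:
var initialVideoZoomFactor: CGFloat = 1.0

@IBAction func handlePinchGesture(sender: UIPinchGestureRecognizer) {
    if sender.state == .began {
        initialVideoZoomFactor = captureDevice?.videoZoomFactor ?? 1.0
    } else {
        let scale: CGFloat = min(max(1, initialVideoZoomFactor * sender.scale), 4)

        CATransaction.begin()
        CATransaction.setAnimationDuration(0.01)
        previewLayer?.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale))
        CATransaction.commit()

        do {
            try captureDevice?.lockForConfiguration()
            captureDevice?.videoZoomFactor = scale
            captureDevice?.unlockForConfiguration()
        } catch {
            // Handle the failure to lock the device for configuration as appropriate.
        }
    }
}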