A UIFont can easily be set up via a font descriptor, e.g.
let settings: [UIFontDescriptor.FeatureKey: Int] = [
    .featureIdentifier: kStylisticAlternativesType,
    .typeIdentifier: 2
]
let descriptor = someBaseFont.fontDescriptor.addingAttributes([.featureSettings: [settings]])
let newFont = UIFont(descriptor: descriptor, size: size)
(How) is it possible to achieve the same in SwiftUI?
No direct way for now, but here is a solution:
// .. your above code
let newFont = UIFont(descriptor: descriptor, size: size)
let myFont = Font(newFont as CTFont) // << here !!
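For completeness, a minimal sketch putting the two snippets together; the font name "Avenir-Book", the selector value 2, and the ContentView are illustrative assumptions, not part of the original answer:
import SwiftUI
import UIKit
import CoreText

// Builds a SwiftUI Font with one stylistic-alternates selector enabled.
func stylizedFont(size: CGFloat) -> Font {
    let settings: [UIFontDescriptor.FeatureKey: Int] = [
        .featureIdentifier: kStylisticAlternativesType,
        .typeIdentifier: 2 // placeholder selector; use the one your font defines
    ]
    let baseFont = UIFont(name: "Avenir-Book", size: size) ?? .systemFont(ofSize: size)
    let descriptor = baseFont.fontDescriptor.addingAttributes([.featureSettings: [settings]])
    return Font(UIFont(descriptor: descriptor, size: size) as CTFont) // UIFont is toll-free bridged to CTFont
}

struct ContentView: View {
    var body: some View {
        Text("Stylistic alternates").font(stylizedFont(size: 24))
    }
}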
I am new to Rust and I am trying to port Go code that I had written previously. The Go code downloaded files from S3 and decompressed them on the fly (without writing to disk), then parsed them.
Currently, the only solution I have found is to save the gzipped files to disk, then decompress and parse them.
The perfect pipeline would decompress and parse them directly.
How can I accomplish this?
use anyhow::{anyhow, bail, Context, Result}; // (xp) (thiserror in prod)
use aws_sdk_s3::{config, ByteStream, Client, Credentials, Region};
use std::env;
use std::fs::{create_dir_all, File};
use std::io::{BufWriter, Write};
use std::path::Path;
use tokio_stream::StreamExt;

const ENV_CRED_KEY_ID: &str = "KEY_ID";
const ENV_CRED_KEY_SECRET: &str = "KEY_SECRET";
const BUCKET_NAME: &str = "bucketname";
const REGION: &str = "us-east-1";
#[tokio::main]
async fn main() -> Result<()> {
    let client = get_aws_client(REGION)?;

    let keys = list_keys(&client, BUCKET_NAME, "CELLDATA/year=2022/month=06/day=06/").await?;
    println!("List:\n{}", keys.join("\n"));

    let dir = Path::new("input/");
    let key: &str = &keys[0];
    download_file_bytes(&client, BUCKET_NAME, key, dir).await?;
    println!("Downloaded {key} in directory {}", dir.display());

    Ok(())
}
async fn download_file_bytes(client: &Client, bucket_name: &str, key: &str, dir: &Path) -> Result<()> {
    // VALIDATE
    if !dir.is_dir() {
        bail!("Path {} is not a directory", dir.display());
    }

    // create file path and parent dir(s)
    let mut file_path = dir.join(key);
    let parent_dir = file_path
        .parent()
        .ok_or_else(|| anyhow!("Invalid parent dir for {:?}", file_path))?;
    if !parent_dir.exists() {
        create_dir_all(parent_dir)?;
    }
    file_path.set_extension("json");

    // BUILD - aws request
    let req = client.get_object().bucket(bucket_name).key(key);

    // EXECUTE
    let res = req.send().await?;
    // STREAM result to file
    // NOTE: this writes the *gzipped* bytes straight to disk; the commented
    // GzDecoder line is my failed attempt at inflating on the fly.
    let mut data: ByteStream = res.body;
    let file = File::create(&file_path)?;
    let mut buf_writer = BufWriter::new(file);
    while let Some(bytes) = data.try_next().await? {
        // let mut gz = GzDecoder::new(&bytes[..]); // where would this fit in?
        buf_writer.write_all(&bytes)?;
    }
    buf_writer.flush()?;

    Ok(())
}
fn get_aws_client(region: &str) -> Result<Client> {
    // get the id/secret from env
    let key_id = env::var(ENV_CRED_KEY_ID).context("Missing KEY_ID")?;
    let key_secret = env::var(ENV_CRED_KEY_SECRET).context("Missing KEY_SECRET")?;

    // build the aws credentials
    let cred = Credentials::new(key_id, key_secret, None, None, "loaded-from-custom-env");

    // build the aws client
    let region = Region::new(region.to_string());
    let conf_builder = config::Builder::new().region(region).credentials_provider(cred);
    let conf = conf_builder.build();
    let client = Client::from_conf(conf);
    Ok(client)
}
Your snippet doesn't tell where GzDecoder comes from, but I'll assume it's flate2::read::GzDecoder.
flate2::read::GzDecoder is already built in a way that it can wrap anything that implements std::io::Read:
GzDecoder::new expects an argument that implements Read => deflated data in
GzDecoder itself implements Read => inflated data out
Therefore, you can use it just like a BufReader: wrap your reader and use the wrapped value in its place:
use flate2::read::GzDecoder;
use std::fs::File;
use std::io::Cursor;

fn main() {
    // Something that implements `std::io::Read` (a stand-in for your S3 body;
    // real input would of course have to be valid gzip data)
    let data = [0, 1, 2, 3];
    let c = Cursor::new(data);

    // A dummy output
    let mut out_file = File::create("/tmp/out").unwrap();

    // Using the raw data would look like this:
    // std::io::copy(&mut c, &mut out_file).unwrap();

    // To inflate on the fly, "pipe" the data through the decoder, i.e. wrap the reader
    let mut stream = GzDecoder::new(c);

    // Consume the `Read`er somehow
    std::io::copy(&mut stream, &mut out_file).unwrap();
}
You don't mention what "and parse them" entails, but the same concept applies: If your parser can read from an impl Read (e.g. it can read from a std::fs::File), then it can also read directly from a GzDecoder.
I am working on an off-screen render program and I am using the glium crate to do this. I followed the screenshot.rs example, and it worked well.
Then I made some changes:
The original code was
fn main() {
    // building the display, ie. the main object
    let event_loop = glutin::EventsLoop::new();
    let wb = glutin::WindowBuilder::new().with_visible(true);
    let cb = glutin::ContextBuilder::new();
    let display = glium::Display::new(wb, cb, &event_loop).unwrap();
    // building the vertex buffer, which contains all the vertices that we will draw
I grouped this code into a function:
fn main() {
    // building the display, ie. the main object
    let event_loop = glutin::EventsLoop::new();
    let display = build_display((128, 128), &event_loop);
    // building the vertex buffer, which contains all the vertices that we will draw
pub fn build_display(size: (u32, u32), event_loop: &glutin::EventsLoop) -> glium::Display {
    let version = parse_version(); // this will return `OpenGL 3.3`
    let wb = glutin::WindowBuilder::new()
        .with_visibility(false)
        .with_dimensions(glutin::dpi::LogicalSize::from(size));
    let cb = glutin::ContextBuilder::new()
        .with_gl(version);
    glium::Display::new(wb, cb, event_loop).unwrap()
}
After this modification, the program still worked well. So I continued to add the headless-context:
fn main() {
    // building the display, ie. the main object
    let event_loop = glutin::EventsLoop::new();
    let display = build_display_headless((128, 128), &event_loop);
    // building the vertex buffer, which contains all the vertices that we will draw
pub fn build_display_headless(size: (u32, u32), event_loop: &glutin::EventsLoop) -> glium::HeadlessRenderer {
    let version = parse_version(); // this will return `OpenGL 3.3`
    let ctx = glutin::ContextBuilder::new()
        .with_gl(version)
        .build_headless(event_loop, glutin::dpi::PhysicalSize::from(size))
        .expect("1");
    //let ctx = unsafe { ctx.make_current().expect("3") };
    glium::HeadlessRenderer::new(ctx).expect("4")
}
But this time, the program did not work. There was no panic while running, but the output image was entirely black, and its size was not 128x128 but 800x600.
I tried removing libEGL.dll so that, according to the docs of the glutin crate, .build_headless would fall back to building a window and hiding it, just as my build_display function does. However, this failed too. What could cause this?
Loading a .dae file as a scene element
This code works, loading the file as the scene:
let scene = SCNScene(named: "art.scnassets/base-wall-tile_sample.dae")!
This code, loading the file as SCNGeometry, doesn't:
let url = Bundle.main.url(forResource: "art.scnassets/base-wall-tile_sample", withExtension: "dae")
let source = SCNSceneSource(url: url! )
let geo = source!.entryWithIdentifier("Geo", withClass: SCNGeometry.self)!
url and source are ok, but it crashes trying to produce geo. Bad instruction.
This code, like several examples offered on the web, was in Swift 2 (e.g. "Load a collada (dae) file into SCNNode" (Swift - SceneKit)). I had to juggle it into Swift 3, and something seems to have been lost in translation. Can someone tell me how to do this right?
A .dae file is always loaded as an SCNScene. You need to name the node containing the geometry you want to add.
Then you can load the scene, filter it for the node with the given name, and add it to your scene.
func addNode(named nodeName: String, fromSceneNamed sceneName: String, to scene: SCNScene) {
    if let loadedScene = SCNScene(named: sceneName),
       let node = loadedScene.rootNode.childNode(withName: nodeName, recursively: true) {
        scene.rootNode.addChildNode(node)
    }
}
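A call site might look like this; the node name "Geo", the file name, and `sceneView` are placeholders for your own asset and view:
// "Geo" must match the node name you assigned in your modeling tool.
addNode(named: "Geo", fromSceneNamed: "art.scnassets/base-wall-tile_sample.dae", to: sceneView.scene!)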
Alternatively, you can collect all child nodes of the loaded scene under a single container node:
guard let shipScene = SCNScene(named: "ship.dae") else { return }
let shipNode = SCNNode()
for childNode in shipScene.rootNode.childNodes {
    shipNode.addChildNode(childNode)
}
scene.rootNode.addChildNode(shipNode)
Foundation is chock-full of functions that take an opaque void *info pointer and later vend it back. In pre-ARC Objective-C days, you could retain an object, supply it, and then release it when it was handed back to your callback.
For example,
CGDataProviderRef CGDataProviderCreateWithData(void *info, const void *data, size_t size, CGDataProviderReleaseDataCallback releaseData);
typedef void (*CGDataProviderReleaseDataCallback)(void *info, const void *data, size_t size);
In this case, you could supply a retained object in info, then release it in the callback (after appropriate casting).
How would I do this in Swift?
With assistance from Quinn 'The Eskimo' at Apple I found out how to do this. Given an object:
let pixelBuffer: CVPixelBuffer
get a pointer:
Get an unmanaged reference after retaining the object:
let fooU: Unmanaged<CVPixelBuffer> = Unmanaged.passRetained(pixelBuffer)
Convert it to a raw pointer
let foo: UnsafeMutableRawPointer = fooU.toOpaque()
Recover the object while releasing it:
Convert the raw pointer to an unmanaged typed object
let ptr: Unmanaged<CVPixelBuffer> = Unmanaged.fromOpaque(foo)
Recover the actual object while releasing it
let pixelBuffer: CVPixelBuffer = ptr.takeRetainedValue()
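Here is the same round trip in isolation, as a minimal sketch with a throwaway class (the names are illustrative, not from the original code):
final class Payload { let value = 42 }

// Retain the object and convert it to a raw pointer (+1 retain).
let raw: UnsafeMutableRawPointer = Unmanaged.passRetained(Payload()).toOpaque()

// ... hand `raw` to a C API as its `info` pointer ...

// Later, in the callback: recover the object and balance the retain (-1).
let payload = Unmanaged<Payload>.fromOpaque(raw).takeRetainedValue()
print(payload.value) // 42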
The following code has been tested in an app. Note that without Apple's help I'd never have figured this out, hence the Q&A! Hope it helps someone!
Also, note the use of @convention(c), something I'd never seen before!
let fooU: Unmanaged<CVPixelBuffer> = Unmanaged.passRetained(pixelBuffer)
let foo: UnsafeMutableRawPointer = fooU.toOpaque()
/* Either "bar" works */
/* let bar: @convention(c) (UnsafeMutableRawPointer?, UnsafeRawPointer, Int) -> Swift.Void = { */
let bar: CGDataProviderReleaseDataCallback = { (pixelPtr: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) in
    if let pixelPtr = pixelPtr {
        let ptr: Unmanaged<CVPixelBuffer> = Unmanaged.fromOpaque(pixelPtr)
        let pixelBuffer: CVPixelBuffer = ptr.takeRetainedValue()
        CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        DispatchQueue.main.async {
            print("UNLOCKED IT!")
        }
    }
}
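// Note: `width`, `height`, `sourceRowBytes`, and `bitmapInfo` are assumed
// to be defined in the enclosing function, which returns a `CGImage?`.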
let val: CVReturn = CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
if val == kCVReturnSuccess,
   let sourceBaseAddr = CVPixelBufferGetBaseAddress(pixelBuffer),
   let provider = CGDataProvider(dataInfo: foo, data: sourceBaseAddr, size: sourceRowBytes * height, releaseData: bar)
{
    let colorspace = CGColorSpaceCreateDeviceRGB()
    let image = CGImage(width: width, height: height, bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: sourceRowBytes,
                        space: colorspace, bitmapInfo: bitmapInfo, provider: provider, decode: nil,
                        shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)
    /* CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0)) */
    return image
} else {
    return nil
}
Quinn recently updated the Apple Forum thread on this, stating that this technique somehow never made it into either of the two Apple Swift documents, and that he just filed a Radar to get it added. So you won't find this info anywhere else (well, at least for now!)
I want to match the size of "yellow_noise.png" to the size of the "pickedImage" from the UIImagePickerController. What should I do?
originalImageView.image = pickedImage
overlayImageView.image = UIImage(named: "yellow_noise.png")
Maybe you could do something like this:
let overlayImageView = UIImageView(frame: originalImageView.frame)
overlayImageView.image = UIImage(named:"yellow_noise.png")
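A slightly fuller sketch, assuming both image views share the same superview (the contentMode choice here is one option, not the only one):
// Give the overlay the same frame as the original image view,
// then stretch the noise image to fill it.
let overlayImageView = UIImageView(frame: originalImageView.frame)
overlayImageView.image = UIImage(named: "yellow_noise.png")
overlayImageView.contentMode = .scaleToFill // cover the picked image exactly
originalImageView.superview?.addSubview(overlayImageView)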