I'm quite new to Swift and recently noticed that you cannot inherit from a generic type parameter in Swift, e.g.
class MyClass<T> : T {}
is not valid in Swift 3 (see this question).
Here is the problem I was hoping to solve with the above construct:
protocol Backend {
func operationA(operand: Int)
}
class ConcreteBackend : Backend {
func operationA(operand: Int) {
// ...
}
// Some other functions and/or custom initializers
// ...
}
class EnhancedBackend<T : Backend> : T {
override func operationA(operand: Int) {
// do something smart here
super.operationA(operand: modifiedOperand)
}
}
Basically EnhancedBackend does something smart with the input of operationA and then passes it to the actual implementation of Backend.
I'm using inheritance here instead of composition because ConcreteBackend might have public properties, functions, and initializers that are not part of the protocol (they relate only to the concrete implementation) but that I also want to expose through EnhancedBackend.
Without inheritance this would not be possible.
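For illustration, a composition-based wrapper can only ever expose what the Backend protocol declares (the ComposedEnhancedBackend name and the + 1 tweak below are just a sketch, not my real code):
class ComposedEnhancedBackend: Backend {
    private let wrapped: Backend

    init(wrapping wrapped: Backend) {
        self.wrapped = wrapped
    }

    func operationA(operand: Int) {
        let modifiedOperand = operand + 1 // stand-in for "something smart"
        wrapped.operationA(operand: modifiedOperand)
    }

    // ConcreteBackend-only properties, functions and initializers are not
    // visible here and would have to be forwarded by hand.
}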
A C++ implementation might look like
// Using concepts here instead of protocols
class ConcreteBackend {
public:
void operationA(int operand) { /* ... */ }
};
template<class T>
class EnhancedBackend : public T {
using Base = T;
public:
// Ensure T is a model of the Backend concept
static_assert(isModelOfConceptBackend<T>::value,
"Template parameter is not a model of concept Backend");
// Ensure all constructors of Base can be used
template<class ...Args, typename = std::enable_if_t<
std::is_constructible<Base, Args...>::value>>
inline EnhancedBackend(Args &&...args) : Base(std::forward<Args>(args)...) {}
void operationA(int operand) {
// ...
Base::operationA(operand);
}
};
So with C++ it's quite simple to solve the problem, but at the moment I have no clue how to implement it with (pure) Swift 3.
Swift's generics are not the same as C++ templates, which are closer to preprocessor macros than to Swift's type semantics.
There are, however, a number of ways to achieve similar results. One way is to use variables that reference the functions whose dispatch you need to customize.
For example:
protocol Backend:class
{
var operationA:(Int) -> () { get set }
func performOperationA(_ : Int) -> ()
}
class ConcreteBackend : Backend
{
lazy var operationA:(Int) -> () = self.performOperationA
func performOperationA(_ : Int) -> ()
{
// ...
}
// Some other functions and/or custom initializers
// ...
}
extension Backend
{
var enhancedForTesting:Self
{
operationA = testing_OperationA
return self
}
func testing_OperationA(_ operand:Int) -> ()
{
let modifiedOperand = operand + 1
performOperationA(modifiedOperand)
}
}
let enhancedBackend = ConcreteBackend().enhancedForTesting
By using a variable to reference the function's implementation, it becomes possible to dynamically change the behaviour of operationA at runtime and for specific instances.
In this example, the enhancements are added to the Backend protocol but they could also have been set by an independent function or even another class that has its own particular kind of altered behaviour.
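For instance, the replacement could just as well be assigned from outside the protocol extension, per instance (a minimal sketch; the logging enhancement here is purely illustrative):
let backend = ConcreteBackend()
backend.operationA = { [unowned backend] operand in
    print("about to run operationA with \(operand)")  // illustrative enhancement
    backend.performOperationA(operand)
}
backend.operationA(42)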
When using this approach, the instance has all the attributes and functions of the concrete class while implementing the enhanced behaviour for the altered functions of the protocol.
Creating the enhanced instance uses a syntax that is as simple as (if not simpler than) a generic class constructor:
// for example:
let instance = ConcreteBackend(...).enhanced
// rather than:
let instance = EnhancedBackend<ConcreteBackend>(...)
Related
I have a configuration struct that looks like this:
struct Conf {
list: Vec<String>,
}
The implementation was internally populating the list member, but now I have decided that I want to delegate that task to another object. So I have:
trait ListBuilder {
fn build(&self, list: &mut Vec<String>);
}
struct Conf<T: Sized + ListBuilder> {
list: Vec<String>,
builder: T,
}
impl<T> Conf<T>
where
T: Sized + ListBuilder,
{
fn init(&mut self) {
self.builder.build(&mut self.list);
}
}
impl<T> Conf<T>
where
T: Sized + ListBuilder,
{
pub fn new(lb: T) -> Self {
let mut c = Conf {
list: vec![],
builder: lb,
};
c.init();
c
}
}
That seems to work fine, but now everywhere that I use Conf, I have to change it:
fn do_something(c: &Conf) {
// ...
}
becomes
fn do_something<T>(c: &Conf<T>)
where
T: ListBuilder,
{
// ...
}
Since I have many such functions, this conversion is painful, especially since most usages of Conf don't care about the ListBuilder; it's an implementation detail. I'm concerned that if I add another generic type to Conf, I'll have to go back and add another generic parameter everywhere. Is there any way to avoid this?
I know that I could use a closure instead for the list builder, but I have the added constraint that my Conf struct needs to be Clone, and the actual builder implementation is more complex and has several functions and some state in the builder, which makes a closure approach unwieldy.
While generic types can seem to "infect" the rest of your code, that's exactly why they are beneficial! The compiler's knowledge of exactly which type is used, and how big it is, allows it to make better optimization decisions.
That being said, it can be annoying! If you have a small number of types that implement your trait, you can also construct an enum of those types and delegate to the child implementations:
enum MyBuilders {
User(FromUser),
File(FromFile),
}
impl ListBuilder for MyBuilders {
fn build(&self, list: &mut Vec<String>) {
use MyBuilders::*;
match self {
User(u) => u.build(list),
File(f) => f.build(list),
}
}
}
// Support code
trait ListBuilder {
fn build(&self, list: &mut Vec<String>);
}
struct FromUser;
impl ListBuilder for FromUser {
fn build(&self, list: &mut Vec<String>) {}
}
struct FromFile;
impl ListBuilder for FromFile {
fn build(&self, list: &mut Vec<String>) {}
}
Now the concrete type would be Conf<MyBuilders>, which you can use a type alias to hide.
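For example, a minimal sketch of hiding the generic parameter behind an alias (the alias name is illustrative):
type AppConf = Conf<MyBuilders>;

fn do_something(c: &AppConf) {
    // the generic parameter no longer appears in the signature
}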
I've used this to good effect when I wanted to be able to inject test implementations into code during testing, but had a fixed set of implementations that were used in the production code.
The enum_dispatch crate helps construct this pattern.
You can use the trait object Box<dyn ListBuilder> to hide the type of the builder. Some of the consequences are dynamic dispatch (calls to the build method will go through a virtual function table), additional memory allocation (boxed trait object), and some restrictions on the trait ListBuilder.
trait ListBuilder {
fn build(&self, list: &mut Vec<String>);
}
struct Conf {
list: Vec<String>,
builder: Box<dyn ListBuilder>,
}
impl Conf {
fn init(&mut self) {
self.builder.build(&mut self.list);
}
}
impl Conf {
pub fn new<T: ListBuilder + 'static>(lb: T) -> Self {
let mut c = Conf {
list: vec![],
builder: Box::new(lb),
};
c.init();
c
}
}
I want to override the default structure of KeyValuePair in C# so that I can make a KeyValuePair accept 'var' types.
Something like this :
List<KeyValuePair<string, var>> kvpList = new List<KeyValuePair<string, var>>()
{
new KeyValuePair<string, var>("Key1", 000),
new KeyValuePair<string, var>("Key2", "value2"),
new KeyValuePair<string, var>("Key3", 25.45),
};
Even if it's only possible for a Dictionary, that would also solve my problem.
You could use object as your type and then cast to/from object as needed. However, it's important to note that this is very much the opposite of object-oriented programming, and it generally indicates an error in your design and architecture.
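For example, a minimal sketch of the object approach applied to the list from the question (values are boxed and must be cast back when read):
using System.Collections.Generic;

var kvpList = new List<KeyValuePair<string, object>>()
{
    new KeyValuePair<string, object>("Key1", 000),
    new KeyValuePair<string, object>("Key2", "value2"),
    new KeyValuePair<string, object>("Key3", 25.45),
};

int first = (int)kvpList[0].Value;  // cast back to the expected type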
This might help you: a list like the one you want is indeed possible, BUT the "var" type (as you named it) must be the same for all KeyValuePair instances. To allow arbitrary types you must use object or dynamic (see Haney's answer).
So considering that you want a single type for all KeyValuePair instances, here is a solution:
Firstly, create this helper class:
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public static class KeyValuePairExtensions
{
    public static List<KeyValuePair<string, T>> GetNewListOfType<T>(Expression<Func<T>> type)
    {
        return new List<KeyValuePair<string, T>>();
    }

    public static void AddNewKeyValuePair<T>(this List<KeyValuePair<string, T>> @this, string key, T element)
    {
        @this.Add(new KeyValuePair<string, T>(key, element));
    }
}
To consume these functions, here is an example:
var lst = KeyValuePairExtensions.GetNewListOfType(() => new { Id = default(int), Name = default(string) });
lst.AddNewKeyValuePair("test1", new { Id = 3, Name = "Keith" });
The idea is to rely on the powerful type inference feature that we have in C#.
Some notes:
1) If T is anonymous and you create the list in one assembly and consume it in another, it is very possible that this will NOT work, because anonymous types are compiled per assembly. In other words, if you have var x = new { X = 3 } in one assembly and var y = new { X = 3 } in another, then x.GetType() != y.GetType(), but within the same assembly the types are identical.
2) If you are wondering whether an instance is created by calling GetNewListOfType, the answer is NO, because the parameter is an expression tree and the lambda is never even compiled. A plain Func would also work, because I never invoke the function in my code; I use it only for type inference.
I have a generic class for making and processing JSON API requests. I pass in the TParams and TResult template parameters, but when I use a derived type its implementation is not called.
Here is some code you can throw in a playground to illustrate:
import Cocoa
// Base class for parameters to POST to service
class APIParams {
func getData() -> Dictionary<String, AnyObject> {
return Dictionary<String, AnyObject>()
}
}
// Base class for parsing a JSON Response
class APIResult {
func parseData(data: AnyObject?) {
}
}
// Derived example for a login service
class DerivedAPIParams: APIParams {
var user = "some#one.com"
var pass = "secret"
// THIS METHOD IS CALLED CORRECTLY
override func getData() -> Dictionary<String, AnyObject> {
return [ "user": user, "pass": pass ]
}
}
// Derived example for parsing a login response
class DerivedAPIResult: APIResult {
var success = false
var token:String? = ""
// THIS METHOD IS NEVER CALLED
override func parseData(data: AnyObject?) {
/*
self.success = data!.valueForKey("success") as Bool
self.token = data!.valueForKey("token") as? String
*/
self.success = true
self.token = "1234"
}
}
class APIOperation<TParams: APIParams, TResult: APIResult> {
var url = "http://localhost:3000"
func request(params: TParams, done: (NSError?, TResult?) -> ()) {
let paramData = params.getData()
// ... snip making a request to website ...
let result = self.parseResult(nil)
done(nil, result)
}
func parseResult(data: AnyObject?) -> TResult {
var result = TResult.self()
// This should call the derived implementation if passed, right?
result.parseData(data)
return result
}
}
let derivedOp = APIOperation<DerivedAPIParams, DerivedAPIResult>()
let params = DerivedAPIParams()
derivedOp.request(params) {(error, result) in
if result? {
result!.success
}
}
The really weird thing is that only the DerivedAPIResult.parseData() is not called, whereas the DerivedAPIParams.getData() method is called. Any ideas why?
UPDATE: This defect is fixed in Xcode 6.3 beta 1 (Apple Swift version 1.2 (swiftlang-602.0.37.3 clang-602.0.37)).
Added info for a workaround when using Xcode 6.1 (Swift 1.1).
See these dev forum threads for details:
https://devforums.apple.com/thread/251920?tstart=30
https://devforums.apple.com/message/1058033#1058033
In a very similar code sample I was having the exact same issue. After waiting through beta after beta for a "fix", I did more digging and discovered that I can get the expected results by making the base class init() required.
By way of example, here is Matt Gibson's reduced example "fixed" by adding the proper init() to APIResult:
// Base class for parsing a JSON Response
class APIResult {
// adding required init() to base class yields the expected behavior
required init() {}
}
// Derived example for parsing a login response
class DerivedAPIResult: APIResult {
}
class APIOperation<TResult: APIResult> {
init() {
// EDIT: workaround for xcode 6.1, tricking the compiler to do what we want here
let tResultClass : TResult.Type = TResult.self
var test = tResultClass()
// should be able to just do, but it is broken and acknowledged as such by Apple
// var test = TResult()
println(test.self) // now shows that we get DerivedAPIResult
}
}
// With the workaround, generic instantiation now creates DerivedAPIResult
let derivedOp = APIOperation<DerivedAPIResult>()
I do not know why this works. If I get time I will dig deeper, but my best guess is that requiring init() causes different object allocation/construction code to be generated, which forces the proper setup of the vtable we are hoping for.
Looks possibly surprising, certainly. I've reduced your case to something rather simpler, which might help to figure out what's going on:
// Base class for parsing a JSON Response
class APIResult {
}
// Derived example for parsing a login response
class DerivedAPIResult: APIResult {
}
class APIOperation<TResult: APIResult> {
init() {
var test = TResult()
println(test.self) // Shows that we get APIResult, not DerivedAPIResult
}
}
// Templated creation creates APIResult
let derivedOp = APIOperation<DerivedAPIResult>()
...so it seems that creating a new instance of a templated class with a type constraint gives you an instance of the constraint class, rather than the derived class you use to instantiate the specific template instance.
Now, from looking through the Swift book, I'd say that Swift's generics would probably prefer you not to create your own instances of the constrained type inside the generic code, but instead to define places that hold instances which are passed in. By which I mean that this works:
// Base class for parsing a JSON Response
class APIResult {
}
// Derived example for parsing a login response
class DerivedAPIResult: APIResult {
}
class APIOperation<T: APIResult> {
var instance: T
init(instance: T) {
self.instance = instance
println(instance.self) // As you'd expect, this is a DerivedAPIResult
}
}
let derivedOpWithPassedInstance = APIOperation<DerivedAPIResult>(instance: DerivedAPIResult())
...but I'm not clear whether what you're trying should technically be allowed or not.
My guess is that the way generics are implemented means that there's not enough type information when creating the template to create objects of the derived type from "nothing" within the template—so you'd have to create them in your code, which knows about the derived type it wants to use, and pass them in, to be held by templated constrained types.
parseData needs to be defined as a class func which creates an instance of itself, assigns whatever instance properties, and then returns that instance. Basically, it needs to be a factory method. Calling .self() on the type is just accessing the type as a value, not an instance. I'm surprised you don't get some kind of error calling an instance method on a type.
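A rough sketch of that factory-method shape, in more recent Swift syntax (the required init and the hard-coded values are assumptions for illustration, mirroring the question):
class APIResult {
    // required so the factory can call self.init() on the dynamic type
    required init() {}

    class func parse(data: Any?) -> Self {
        return self.init()
    }
}

class DerivedAPIResult: APIResult {
    var success = false
    var token: String? = ""

    override class func parse(data: Any?) -> Self {
        let result = self.init()
        result.success = true   // stand-in for real JSON parsing
        result.token = "1234"
        return result
    }
}

// Inside the generic class, TResult.parse(data:) now returns a fully
// configured instance of the derived type.
let parsed = DerivedAPIResult.parse(data: nil)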
Say we have a base fabric element interface:
class BaseFabricElement {
public:
BaseFabricElement(){}
virtual ~BaseFabricElement(){}
virtual void action(){}
};
We have an enumeration:
enum TypeCode {
TypeCodeLive = 10,
TypeCodeDie = 100
};
And we have implementations for our TypeCodes.
We want a fabric (factory) that returns the desired type for a given TypeCode as a BaseFabricElement*, as a normal factory would.
How can types be added to the fabric via a preprocessor define?
say:
class LiveFabricElement: public BaseFabricElement {
public:
LiveFabricElement() :
BaseFabricElement(){}
virtual ~LiveFabricElement(){}
virtual void action(){}
};
ADD_TO_FABRIC(LiveFabricElement);
Update:
Found this helpful article on registering types with a factory during the global initialization phase. All that is left is creating a define that generates the stub registration classes.
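A minimal sketch of that self-registration idea, assuming the BaseFabricElement and TypeCode definitions above (the registry layout is not from the article, and the macro here takes the TypeCode as a second argument for simplicity):
#include <functional>
#include <map>

using Creator = std::function<BaseFabricElement*()>;

inline std::map<TypeCode, Creator>& fabricRegistry() {
    static std::map<TypeCode, Creator> registry;  // populated during global initialization
    return registry;
}

struct FabricRegistrar {
    FabricRegistrar(TypeCode code, Creator creator) {
        fabricRegistry()[code] = creator;
    }
};

// Each use defines a static registrar whose constructor runs at start-up.
#define ADD_TO_FABRIC(Type, Code) \
    static FabricRegistrar registrar_##Type(Code, [] { return new Type(); })

inline BaseFabricElement* createElement(TypeCode code) {
    auto it = fabricRegistry().find(code);
    return it != fabricRegistry().end() ? it->second() : nullptr;
}

// Usage, at namespace scope:
ADD_TO_FABRIC(LiveFabricElement, TypeCodeLive);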
I think you don't need a macro to achieve your purpose. If you must use the enum, do something like this:
class Fabric {
public:
BaseFabricElement* createElement(TypeCode typeCode) {
switch (typeCode) {
case TypeCodeLive: return new LiveFabricElement();
case TypeCodeDie: return new DieFabricElement(); // assuming an implementation exists for TypeCodeDie
// ... other cases ...
default: return NULL;
}
}
};
If the creation process does not depend on Fabric state, then the createElement method can be static. I would also consider returning a smart pointer instead of a raw one, and renaming Fabric to Factory.
I want to create a function object, which also has some properties held on it. For example in JavaScript I would do:
var f = function() { }
f.someValue = 3;
Now in TypeScript I can describe the type of this as:
var f: { (): any; someValue: number; };
However I can't actually build it, without requiring a cast. Such as:
var f: { (): any; someValue: number; } =
<{ (): any; someValue: number; }>(
function() { }
);
f.someValue = 3;
How would you build this without a cast?
Update: This answer was the best solution in earlier versions of TypeScript, but there are better options available in newer versions (see other answers).
The accepted answer works and might be required in some situations, but it has the downside of providing no type safety while building up the object. This technique will at least throw a type error if you attempt to add an undeclared property.
interface F { (): any; someValue: number; }
var f = <F>function () { }
f.someValue = 3
// type error
f.notDeclared = 3
This is easily achievable now (TypeScript 2.x+) with Object.assign(target, source).
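For example, a minimal sketch (the someValue property mirrors the question):
const f = Object.assign(
    function () { },
    { someValue: 3 }
);

f();            // callable as before
f.someValue;    // typed as number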
The magic here is that Object.assign<T, U>(t: T, u: U) is typed to return the intersection T & U.
Enforcing that this resolves to a known interface is also straightforward. For example:
interface Foo {
(a: number, b: string): string[];
foo: string;
}
let method: Foo = Object.assign(
(a: number, b: string) => { return a * a; },
{ foo: 10 }
);
which errors due to incompatible typing:
Error: foo:number not assignable to foo:string
Error: number not assignable to string[] (return type)
caveat: you may need to polyfill Object.assign if targeting older browsers.
TypeScript is designed to handle this case through declaration merging:
you may also be familiar with JavaScript practice of creating a function and then extending the function further by adding properties onto the function. TypeScript uses declaration merging to build up definitions like this in a type-safe way.
Declaration merging lets us say that something is both a function and a namespace (internal module):
function f() { }
namespace f {
export var someValue = 3;
}
This preserves typing and lets us write both f() and f.someValue. When writing a .d.ts file for existing JavaScript code, use declare:
declare function f(): void;
declare namespace f {
export var someValue: number;
}
Adding properties to functions is often a confusing or unexpected pattern in TypeScript, so try to avoid it, but it can be necessary when using or converting older JS code. This is one of the only times it would be appropriate to mix internal modules (namespaces) with external.
So if the requirement is to simply build and assign that function to "f" without a cast, here is a possible solution:
var f: { (): any; someValue: number; };
f = (() => {
var _f : any = function () { };
_f.someValue = 3;
return _f;
})();
Essentially, it uses a self executing function literal to "construct" an object that will match that signature before the assignment is done. The only weirdness is that the inner declaration of the function needs to be of type 'any', otherwise the compiler cries that you're assigning to a property which does not exist on the object yet.
EDIT: Simplified the code a bit.
Old question, but for versions of TypeScript starting with 3.1, you can simply do the property assignment as you would in plain JS, as long as you use a function declaration or the const keyword for your variable:
function f () {}
f.someValue = 3; // fine
const g = function () {};
g.someValue = 3; // also fine
var h = function () {};
h.someValue = 3; // Error: "Property 'someValue' does not exist on type '() => void'"
Reference and online example.
As a shortcut, you can dynamically assign the object value using the ['property'] accessor:
var f = function() { }
f['someValue'] = 3;
This bypasses the type checking. However, it is pretty safe because you have to intentionally access the property the same way:
var val = f.someValue; // This won't work
var val = f['someValue']; // Yeah, I meant to do that
However, if you really want the type checking for the property value, this won't work.
I can't say that it's very straightforward but it's definitely possible:
interface Optional {
<T>(value?: T): OptionalMonad<T>;
empty(): OptionalMonad<any>;
}
const Optional = (<T>(value?: T) => OptionalCreator(value)) as Optional;
Optional.empty = () => OptionalCreator();
If you are curious, this is from a gist of mine with the TypeScript/JavaScript version of Optional.
An updated answer: since the addition of intersection types via &, it is possible to "merge" two inferred types on the fly.
Here's a general helper that reads the properties of some object from and copies them over an object onto. It returns the same object onto but with a new type that includes both sets of properties, so correctly describing the runtime behaviour:
function merge<T1, T2>(onto: T1, from: T2): T1 & T2 {
Object.keys(from).forEach(key => onto[key] = from[key]);
return onto as T1 & T2;
}
This low-level helper does still perform a type-assertion, but it is type-safe by design. With this helper in place, we have an operator that we can use to solve the OP's problem with full type safety:
interface Foo {
(message: string): void;
bar(count: number): void;
}
const foo: Foo = merge(
(message: string) => console.log(`message is ${message}`), {
bar(count: number) {
console.log(`bar was passed ${count}`)
}
}
);
You can try it out in the TypeScript Playground. Note that we have constrained foo to be of type Foo, so the result of merge has to be a complete Foo. So if you rename bar to bad, you get a type error.
NB There is still one type hole here, however. TypeScript doesn't provide a way to constrain a type parameter to be "not a function". So you could get confused and pass your function as the second argument to merge, and that wouldn't work. So until this can be declared, we have to catch it at runtime:
function merge<T1, T2>(onto: T1, from: T2): T1 & T2 {
if (typeof from !== "object" || from instanceof Array) {
throw new Error("merge: 'from' must be an ordinary object");
}
Object.keys(from).forEach(key => onto[key] = from[key]);
return onto as T1 & T2;
}
This departs from strong typing, but you can do
var f: any = function() { }
f.someValue = 3;
if you are trying to get around oppressive strong typing like I was when I found this question. Sadly, this is a case where TypeScript fails on perfectly valid JavaScript, so you have to tell TypeScript to back off.
"Your JavaScript is perfectly valid TypeScript" evaluates to false. (Note: using 0.9.5)