The signature of `reduce` is

```typescript
reduce<U>(callbackfn: (previousValue: U, currentValue: T, currentIndex: number, array: T[]) => U, initialValue: U): U
```
(`T` is the type parameter of the array itself.) So, faced with the code

```typescript
[fooRes, barRes].reduce(flatten, {})
```

the type checker's job is to figure out what `U` is. Let's walk through its reasoning:
- `fooRes : IFoo` and `barRes : IBar`, so `[fooRes, barRes] : (IFoo | IBar)[]`
- Thus, the array's `T ~ IFoo | IBar`
- So `flatten` is being called with its `T` parameter set to `IFoo | IBar`
- `flatten`'s return type `(K & T)` is therefore `K & (IFoo | IBar)`
- Since `flatten`'s return type must be assignable to `U`, that gives us the constraint `U >= (U & (IFoo | IBar))`, which simplifies to `U >= (IFoo | IBar)`
- The other bit of evidence is the `initialValue` parameter, which has type `{}`
- So `U >= {}`
- The least upper bound of these two constraints is `{}`. So the type checker infers `U ~ {}`.
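The original `flatten` isn't shown here, but from the `(K & T)` return type mentioned above it presumably looks something like the sketch below (the `IFoo`/`IBar` members are invented for illustration). The inferred result type of the reduction is `{}`, even though at runtime the object carries both properties:

```typescript
interface IFoo { foo: string; }
interface IBar { bar: number; }

// A plausible flatten, reconstructed from its `K & T` return type.
function flatten<K, T>(acc: K, value: T): K & T {
  return Object.assign({}, acc, value);
}

const fooRes: IFoo = { foo: "hello" };
const barRes: IBar = { bar: 42 };

// U is inferred as {}, so `result` is typed {} even though the
// runtime value holds both `foo` and `bar`.
const result = [fooRes, barRes].reduce(flatten, {});

// result.foo  // type error: Property 'foo' does not exist on type '{}'
```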
Why doesn't it realise that the return type is `IFoo & IBar`? The type checker doesn't reason about the runtime behaviour of your code - that `flatten`'s parameter takes on a variety of different types throughout the reduction. An array of type `(IFoo | IBar)[]` is not guaranteed to contain both `IFoo`s and `IBar`s - it could just be an array of `IFoo`s. Deducing that flattening a heterogeneous list squashes its constituent types down into an intersection would require quite a sophisticated proof, and it doesn't seem reasonable to expect a machine to write such a proof for you.
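If you know the array really does contain one of each, one way out (a common workaround, not something the inference will do for you) is to supply `U` explicitly; the cast on the initial value is the usual, slightly dishonest price of this pattern, since `{}` is not really an `IFoo & IBar` yet:

```typescript
interface IFoo { foo: string; }
interface IBar { bar: number; }

// Same hypothetical flatten as discussed above.
function flatten<K, T>(acc: K, value: T): K & T {
  return Object.assign({}, acc, value);
}

const fooRes: IFoo = { foo: "hello" };
const barRes: IBar = { bar: 42 };

// Passing U explicitly overrides the inference that settled on {}.
const merged = [fooRes, barRes].reduce<IFoo & IBar>(
  flatten,
  {} as IFoo & IBar
);

merged.foo; // now type-checks
merged.bar; // now type-checks
```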