See this StackOverflow answer regarding Go's type inference. I'm not familiar with Go myself, but based on this answer it seems to be a one-way "type deduction" (to borrow some C++ terminology). It means that if you have:

x := y + z

then the type of x is deduced by figuring out the type of y + z, which is a relatively trivial thing for the compiler to do. To do this, the types of y and z need to be known a priori: they could be given via type annotations or inferred from the literals assigned to them.


In contrast, most functional languages have type inference that uses all possible information within a module (or function, if the inference algorithm is local) to derive the types of the variables. Complicated inference algorithms (such as Hindley-Milner) often involve some form of type unification (a bit like solving equations) behind the scenes. For example, in Haskell, if you write:

let x = y + z

then Haskell can infer the types of not just x but also y and z, simply based on the fact that you're performing addition on them. In this case:

x :: Num a => a
y :: Num a => a
z :: Num a => a

(The lowercase a here denotes a polymorphic type variable, similar to "generics" in other languages like C++. The Num a => part is a constraint indicating that the type a must support some notion of addition.)
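
To make the polymorphism concrete, here's a minimal sketch (assuming GHC; the helper name addBoth is made up for illustration). The definition carries no annotations, yet GHC infers the most general type, and each use site then pins a down to a concrete type:

-- GHC infers addBoth :: Num a => a -> a -> a from the body alone.
addBoth y z = y + z

main :: IO ()
main = do
  print (addBoth (2 :: Int) 3)       -- here a is unified with Int
  print (addBoth 1.5 2.5 :: Double)  -- here a is unified with Double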

Here's a more interesting example: the fixed-point combinator that allows any recursive function to be defined:

let fix f = f (fix f)

Notice that we have specified neither the type of f nor the type of fix, yet the Haskell compiler can automatically figure out that:

f :: t -> t
fix :: (t -> t) -> t

This says that:

  • The parameter f must be a function from some arbitrary type t to the same type t.
  • fix is a function that receives a parameter of type t -> t and returns a result of type t.
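
To see fix at work, here's a small sketch (written as a standalone file rather than in GHCi, with the name factorial chosen purely for illustration). The step function passed to fix is not recursive itself; fix ties the knot for it:

fix :: (t -> t) -> t
fix f = f (fix f)

-- The lambda takes "the function to call recursively" as its first argument;
-- fix feeds the result back in, so recurse ends up being factorial itself.
factorial :: Integer -> Integer
factorial = fix (\recurse n -> if n <= 1 then 1 else n * recurse (n - 1))

main :: IO ()
main = print (factorial 5)  -- prints 120

The signature on fix is written out only for readability; drop it and GHC infers the same type on its own, exactly as described above.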
