Null value handling in large-scale applications

Tl;dr: Should we return null and not know where the error originated, or throw exceptions and handle them appropriately?

A few years ago I found this article: http://stackify.com/golden-rule-programming/

It says:

> If it can be null, it will be null

This kind of thinking leads to defensive programming throughout the application: tons of null checks everywhere. In theory you should never get a NullReferenceException, and it should be more efficient than throwing an exception whenever a null is encountered.
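
To make the defensive style concrete, here is a rough C# sketch (the `Customer` and `InvoiceMailer` types are just made up for illustration):

```csharp
using System;

// Defensive style: every layer checks for null before touching the value,
// so in theory a NullReferenceException is never thrown.
public class Customer
{
    public string Email { get; set; }
}

public static class InvoiceMailer
{
    public static bool TrySendInvoice(Customer customer)
    {
        if (customer == null)        // the null could have come from anywhere upstream
            return false;
        if (customer.Email == null)  // another guard, another "if null" branch
            return false;

        Console.WriteLine($"Sending invoice to {customer.Email}");
        return true;
    }
}
```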

On the other hand, if we return null in all layers of the application, we cannot know for sure where the null value originated.

Throwing exceptions is the opposite: we know the origin of the error for sure.
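
A rough sketch of that fail-fast style, again with made-up types:

```csharp
using System;

// Fail-fast style: reject the null at the boundary where it first appears,
// so the exception's stack trace points at the real origin of the problem.
public class Order
{
    public int Id { get; set; }
}

public class OrderService
{
    public void Ship(Order order)
    {
        if (order == null)
            throw new ArgumentNullException(nameof(order)); // origin recorded here, not three layers later

        Console.WriteLine($"Shipping order {order.Id}");
    }
}
```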

When null is returned everywhere, it can come from the data layer, from the database, or even from business logic that tried to make sense of the data and failed... Basically, it can be anything.
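
Here is roughly what I mean, as a C# sketch (all of the types are invented): a null born in the repository only blows up two layers later, so the NullReferenceException's stack trace points at the presentation code rather than at the data layer that produced it.

```csharp
using System;

public class Customer { public string Name { get; set; } }

public class CustomerRepository
{
    // Data layer: "not found" silently becomes null.
    public Customer FindById(int id) => null;
}

public class CustomerService
{
    private readonly CustomerRepository _repo = new CustomerRepository();

    // Business layer: just passes the null along.
    public Customer GetCustomer(int id) => _repo.FindById(id);
}

public static class Program
{
    public static void Main()
    {
        var service = new CustomerService();
        var customer = service.GetCustomer(42);

        // Presentation layer: the NullReferenceException is thrown here,
        // far away from the repository call that originated the null.
        Console.WriteLine(customer.Name.ToUpper());
    }
}
```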

Therefore, I see two (or three) camps:

  1. If it can be null, it should be null
  2. Throw an exception wherever you encounter a null value that is not anticipated
  3. Mix the first two (see the sketch after this list)
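
One common way to mix the two camps is to let the method name tell the caller whether null is a legitimate answer or a bug. A rough C# sketch (the `User` and `UserRepository` types are made up for illustration):

```csharp
using System;
using System.Collections.Generic;

// One possible mix: the method name signals whether null is a
// legitimate answer ("Find") or an unexpected error ("Get").
public class User
{
    public int Id { get; set; }
}

public class UserRepository
{
    private readonly Dictionary<int, User> _users = new Dictionary<int, User>();

    // "Find": absence is an expected outcome, so null is a valid return value.
    public User FindById(int id) =>
        _users.TryGetValue(id, out var user) ? user : null;

    // "Get": absence is a bug or bad data, so fail fast with context.
    public User GetById(int id) =>
        FindById(id) ?? throw new InvalidOperationException($"User {id} does not exist.");
}
```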

Up front, I think option 3, mixing the first two, leads to a less predictable system and therefore a slower pace of development, which is not something we want, right?

Option 1 means not knowing where the null came from, but it is a more efficient way to handle a big load (billions of requests).

Option 2 is not as efficient performance-wise, but it is more informative when diagnosing errors, whether by looking at the logs or by debugging.

The question itself is conceptual, and the answers I am hoping for would be based on experience working with large-scale applications. Really looking forward to what you think :)
