I'm actually going to echo the least popular opinion here and side with Gangnus: code duplication isn't always harmful, and can sometimes be the lesser evil.

If, say, you give me the option of using:

A) A stable (unchanging) and tiny image library, well-tested, which duplicates a few dozen lines of trivial vector math like dot products and lerps and clamps (roughly the kind of code sketched below), but is completely decoupled from anything else and builds in a fraction of a second.

B) An unstable (rapidly changing) image library which depends on an epic math library to avoid those few dozen lines of code. The math library is itself unstable and constantly receiving new updates and changes, so the image library also has to be rebuilt, if not outright changed, along with it. It takes 15 minutes to clean-build the whole thing.

... then obviously it should be a no-brainer to most people that A is preferable, and precisely because of its minor code duplication. The key emphasis I need to make is the well-tested part. Obviously there's nothing worse than duplicated code which doesn't even work in the first place, at which point it's duplicating bugs.
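To make option A concrete, here's a rough sketch of the kind of trivial vector math an image library might simply duplicate internally rather than pull in a whole math library for. The `Vec3` type and the names are purely illustrative, not from any particular library:

```cpp
// Hypothetical sketch of the few dozen lines option A duplicates.
// These are one-liners with no reason to ever change once they work.
struct Vec3 { float x, y, z; };

// Dot product of two 3D vectors.
inline float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Linear interpolation from a to b by factor t.
inline float lerp(float a, float b, float t) {
    return a + (b - a) * t;
}

// Clamp x into the closed range [lo, hi].
inline float clamp(float x, float lo, float hi) {
    return x < lo ? lo : (x > hi ? hi : x);
}
```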

But there's also coupling and stability to think about, and some modest duplication here and there can serve as a decoupling mechanism which also increases the stability (unchanging nature) of the package.

So my suggestion is actually to focus more on testing, and on coming up with something really stable (as in unchanging, with few reasons to change in the future) and reliable, whose outside dependencies, if there are any, are themselves very stable, rather than on trying to stamp out all forms of duplication in your codebase. In a large team environment, the latter tends to be an impractical goal, and it can also increase the coupling and the amount of unstable code in your codebase.
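And to illustrate the "well-tested" part: code this trivial is cheap to pin down with a handful of assertions that practically never need to change, which is what makes duplicating it tolerable. A minimal sketch, assuming the hypothetical helpers above (their definitions are repeated here so the test file stands alone):

```cpp
#include <cassert>
#include <cmath>

// Same illustrative helpers as in the earlier sketch, repeated so this
// test compiles on its own (fittingly, a little duplication again).
struct Vec3 { float x, y, z; };
inline float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
inline float lerp(float a, float b, float t) { return a + (b - a) * t; }
inline float clamp(float x, float lo, float hi) { return x < lo ? lo : (x > hi ? hi : x); }

int main() {
    // A few checks that pin down the duplicated helpers for good.
    assert(std::fabs(dot({1, 0, 0}, {0, 1, 0})) < 1e-6f);          // orthogonal vectors -> 0
    assert(std::fabs(dot({1, 2, 3}, {4, 5, 6}) - 32.0f) < 1e-6f);  // 1*4 + 2*5 + 3*6 = 32
    assert(std::fabs(lerp(0.0f, 10.0f, 0.25f) - 2.5f) < 1e-6f);    // a quarter of the way
    assert(clamp(5.0f, 0.0f, 1.0f) == 1.0f);                       // above range -> hi
    assert(clamp(-2.0f, 0.0f, 1.0f) == 0.0f);                      // below range -> lo
    assert(clamp(0.5f, 0.0f, 1.0f) == 0.5f);                       // inside range unchanged
    return 0;
}
```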
