Users have recently reported timeout errors and object-reference exceptions in a small, neglected project. I recently took over this project from another developer, so I didn't have the solution set up on my machine.
While trying to get a local build running, I found that several referenced NuGet packages are deprecated, so the solution fails to build. At this morning's standup, I mentioned to the team that I'm hunting down the deprecated packages so that I can debug locally, or at least attach a debugger to QA/Prod, and then continue investigating the issues. A coworker who is in the middle of "handing off" the project chimed in that they got it to build locally by updating the local solution from .NET 3.5 to .NET 4.5 and upgrading to newer NuGet packages.
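For context, my current plan is to pin the exact package versions the last production build used rather than upgrade anything. A minimal sketch of what I mean (the package ID and version below are placeholders, not the project's real dependencies):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- packages.config: pin the exact versions production was built against,
     targeting the original net35 framework. IDs/versions are hypothetical. -->
<packages>
  <package id="Some.Legacy.Package" version="1.2.3" targetFramework="net35" />
</packages>
```

Restoring by exact version this way should still pull deprecated (though not delisted) packages from the feed, keeping the local build's dependency graph identical to production's.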
I thanked them for the suggestion but replied that since I'm trying to debug immediate issues on production, I'd like to replicate the QA and production environments as closely as I can; their response was one of pure confusion and protest. I explained that production has many user-reported errors, beyond those being logged by the server's reporting mechanisms, and that I didn't want to add fuel to the fire by upgrading the solution's framework. The colleague replied that they weren't suggesting I upgrade production, just the local build, and debug that way.
This goes against everything my mentors and senior engineers have taught me about keeping local builds as close to production as possible, and to me that has always been sensible advice. Am I being too hard-headed here? If you think the code fix won't be framework-dependent, should you just do whatever it takes to quickly get a build running so you can debug?