Does anyone know a debugging strategy that can reduce the waiting time until a breakpoint is reached in the area of code suspected to contain the cause?
You can still profile debug builds. I wouldn't spend too much time on it, especially with a bug to fix, but if you have a glaring hotspot that takes 30% of the time or more in debug, and the solution is very simple, it might make a real productivity difference (especially if devs are used to spending 30+ minutes waiting on their debug builds to do something), and it might even help you fix that bug sooner.
In my case, we had a matrix/vector library one time which performed handsomely in release/production builds, but we had sprinkled assertions all over the place to make sure we weren't accessing arrays out of bounds and so forth in debug builds. I found hotspots there that were gross (over 50% of the time was spent in such functions due to all the asserts), so I wrote a unit test to make sure those functions behaved properly and made those asserts apply only when something like DEBUG_MATH_LIB (I forget what I called it; this was many years ago) was defined. That sped up running the software in debug mode substantially: it went from complete torture trying to reproduce issues in the debugger to not too unbearable (debug was still maybe 20 times slower than release, but not 200 times slower).
If you are trying to reproduce a bug which results in faulty output (or a crash, or whatever) and it takes ages, then my number one technique, where applicable, back when I dealt with that a lot was to play with the content with a "binary search" mentality, initially in a release build (not running the debugger).
I might try loading one half of the content (after saving a copy with the other half deleted) and see if it glitches out. If it doesn't, I try the other half and find that it glitches out. Then I divide that second half into quarters, and so forth, until it takes just seconds to load and reproduce the issue. At that point the main problem, that the bug takes so long to reproduce, is solved, and the rest is the painful process of debugging it without the bottleneck of waiting to reproduce it. It's been many years since I had to do that sort of thing (I've found designs which avoid this type of problem for the most part), but my hazy memories from the past recall doing this a lot.
Most of the time you shouldn't need to load 3 gigabytes of data to reproduce an issue. The first step for me is not figuring out the issue but making it faster to reproduce, with that "binary search" mentality: halving the data until it's so small that it loads in the blink of an eye and still causes the issue. Then I engage my "debugging mode". Narrowing down the data also tends to make it more obvious what part of it might be tripping up the software.
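The halving process above can be sketched roughly like this (`shrink` and `reproduces` are hypothetical names; in practice `reproduces` would mean "load this subset in the app and see if it glitches", and real delta-debugging tools are more thorough than plain halving):

```cpp
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// Repeatedly tries to drop half of the records while the bug still
// reproduces, returning a much smaller input that triggers the issue.
// If neither half alone reproduces it (the halves interact), stop.
std::vector<int> shrink(std::vector<int> input,
                        const std::function<bool(const std::vector<int>&)>& reproduces) {
    bool progress = true;
    while (progress && input.size() > 1) {
        progress = false;
        std::size_t half = input.size() / 2;
        std::vector<int> first(input.begin(), input.begin() + half);
        std::vector<int> second(input.begin() + half, input.end());
        if (reproduces(first)) {         // bug lives in the first half
            input = std::move(first);
            progress = true;
        } else if (reproduces(second)) { // bug lives in the second half
            input = std::move(second);
            progress = true;
        }
    }
    return input;
}
```

For example, if the bug triggers whenever one particular record is present, this narrows thousands of records down to just that one in a handful of runs, which is exactly why each reproduction attempt gets faster as you go.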
Ideally I would like to save the process state right before (what I believe is) the buggy section and then debug from that point. I don't think this is possible.
On Windows at least, you can right-click any running process (in Task Manager, for example) and write out a dump file.

You can then open the resulting .DMP file in, say, Visual Studio and jump right into debugging it.
Unfortunately, that doesn't allow resuming the process and letting it finish, to my knowledge (which makes dumps usually only useful to me when the software crashes). However, it can be a way to save a stack snapshot, inspect things, and come back to look at them some more later.