Which option is *better* is partly opinion-based, but on raw speed str.replace is definitely faster. Using timeit in IPython with Python 3.4.2:
In []: %timeit zz.replace(zz[36:zz.strip('&Log=0').rfind('&')],'')
100000 loops, best of 3: 2.04 µs per loop
In []: %timeit re.sub('dealer.+Radius=10','',zz)
100000 loops, best of 3: 2.83 µs per loop
As Padraic Cunningham pointed out, the difference is even greater in Python 2:
In []: %timeit zz.replace(zz[36:zz.strip('&Log=0').rfind('&')],'')
100000 loops, best of 3: 2 µs per loop
In []: %timeit re.sub('dealer.+Radius=10','',zz)
100000 loops, best of 3: 3.11 µs per loop
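For context, the original zz is not shown here, but the two approaches being timed can be sketched on a hypothetical URL (the example string, the domain, and the find()/rfind() span search below are all assumptions, since the answer hard-codes zz[36:...] for its own string):

```python
import re

# Hypothetical URL standing in for zz, which is not shown in the answer.
zz = "http://x.com/find?a=1&dealerName=Bob&Radius=10&Log=0"

# re.sub approach: delete everything from 'dealer' through 'Radius=10'.
via_re = re.sub('dealer.+Radius=10', '', zz)

# str.replace approach: locate the same span by hand, then replace it.
# find()/rfind() replaces the answer's hard-coded indices so the sketch
# works on this example string.
start = zz.find('dealer')
end = zz.rfind('Radius=10') + len('Radius=10')
via_replace = zz.replace(zz[start:end], '')

# Both yield the URL with the middle span removed.
print(via_re)
print(via_replace)
```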
Which one is better depends on the program. Generally, in Python, readability matters more than speed (the standard PEP 8 style is built on the notion that code is read more often than it is written). If speed is vital to the program, the faster str.replace is the better choice; otherwise, the more readable re.sub is.
EDIT
As Anony-Mousse pointed out, using re.compile is faster than either option and arguably more readable too. (You added that you're using Python 2, but I'll put the Python 3 test first to match the order of the tests above.)
With Python 3:
In []: z_match = re.compile('dealer.+Radius=10')
In []: %timeit z_match.sub('', zz)
1000000 loops, best of 3: 1.36 µs per loop
With Python 2:
In []: z_match = re.compile('dealer.+Radius=10')
In []: %timeit z_match.sub('', zz)
100000 loops, best of 3: 1.68 µs per loop
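The payoff of re.compile comes from compiling the pattern once and reusing the pattern object; a minimal sketch, assuming a batch of hypothetical URLs (the strings below are not from the original question):

```python
import re

# Hypothetical URLs standing in for many values of zz.
urls = ["http://x.com/find?dealerName=%d&Radius=10&Log=0" % i
        for i in range(3)]

# Compile once, then reuse the pattern object for every string,
# skipping the per-call pattern-cache lookup that module-level
# re.sub performs.
z_match = re.compile('dealer.+Radius=10')
cleaned = [z_match.sub('', u) for u in urls]
```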
Comments:

1.91 µs vs 3.37 µs for re; Python 3 is 2.1 µs vs 2.4 µs.

urlparse.urlparse() and urlparse.parse_qs()... It would be slower though.

spl = zz.rsplit("&", 2) followed by zz[:36] + "&{}&{}".format(spl[-2], spl[-1]) takes 1.17 µs, and spl = zz.rsplit("&", 2) followed by zz[:36] + "&" + spl[-2] + "&" + spl[-1] takes 935 ns, but any version would break quite easily.