I was wondering if someone could give a practical understanding of what it means for an algorithm to perform in (log a / log b) time?
In other words, practically speaking, for an algorithm to perform in n log n time it would need to split a maximum of log n times and perform n work at each level. For log n performance it would split a maximum of log n times and perform at most a constant amount of work at each level, etc.
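To make that splitting intuition concrete, here is a minimal Python sketch; the function name and the sample input are just illustrative:

```python
def levels_when_halving(n):
    """Count how many times n can be halved before reaching 1.

    That count is log2(n): the "split a max of log n times" above.
    """
    levels = 0
    while n > 1:
        n //= 2   # one split (bisection)
        levels += 1
    return levels

# log n time:   ~levels_when_halving(n) levels, constant work per level
#               (e.g. binary search)
# n log n time: ~levels_when_halving(n) levels, n work per level
#               (e.g. merge sort)
print(levels_when_halving(1024))  # prints 10, since 2**10 == 1024
```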
What would be the equivalent way of explaining the performance of a log a / log b algorithm?
Thanks
Imagine an algorithm that takes two inputs a and b and executes in time proportional to log a / log b. It would take about one time unit when a and b are similarly sized. When b is much larger than a it will execute faster; when a is much larger than b it will execute slower. The scale is logarithmic, so as the values of a and b get larger the total execution time becomes more stable: simply adding 1 to either will have little effect. However, when a and b are smaller, adding 1 will be much more noticeable.

log a / log b equals log of a base b (the change-of-base rule). When we talk about logarithmic complexities, we usually mean log base 2, which comes from the binary nature of computer data representations. Now imagine a merge sort algorithm which, instead of bisecting, trisects. Its complexity would be n log n with the log taken base 3, which is n log n / log 3.

Take your own description of log n performance, but instead of the word "split" use the word "bisect". Now change that word to "b-sect" and you have a description of log n / log b: an algorithm that divides its input into b parts at each level needs at most log_b n = log n / log b levels.
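Putting those explanations together, here is a short Python sketch (hypothetical, purely for illustration) that repeatedly b-sects a problem of size n and compares the measured number of levels with log n / log b:

```python
import math

def depth_of_b_way_split(n, b):
    """Count how many times n can be divided by b before reaching 1.

    This is the recursion depth of an algorithm that splits its input
    into b equal parts at every level: log_b(n) = log n / log b.
    """
    depth = 0
    while n > 1:
        n //= b   # one "b-section" step
        depth += 1
    return depth

if __name__ == "__main__":
    n = 1_000_000
    for b in (2, 3, 10):
        measured = depth_of_b_way_split(n, b)
        predicted = math.log(n) / math.log(b)  # same as math.log(n, b)
        print(f"b={b}: measured {measured} levels, log n / log b = {predicted:.2f}")
```

Integer division makes the measured count differ from the prediction by at most a rounding error, but the growth rate is the same: increasing the branching factor b only divides the number of levels by the constant log b.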