
This works:

f :: Int -> Int
f n = gof n where
      gof 0 = 1
      gof i = i - ms!! ( fs!! (i-1) )
      gom 0 = 0
      gom i = i - fs!! ( ms!! (i-1) )
      fs = [gof j | j <- [0..n]]
      ms = [gom j | j <- [0..n]]

m :: Int -> Int
m n = gom n where
      gof 0 = 1
      gof i = i - ms!! ( fs!! (i-1) )
      gom 0 = 0
      gom i = i - fs!! ( ms!! (i-1) )
      fs = [gof j | j <- [0..n]]
      ms = [gom j | j <- [0..n]]

However, it is really repetitive. Is there a way to avoid repeating those chunks of code? For reference, this is an adaptation of:

http://jelv.is/blog/Lazy-Dynamic-Programming/

Sequence reference:

https://en.wikipedia.org/wiki/Hofstadter_sequence

I checked it against the numbers:

https://oeis.org/A005378 https://oeis.org/A005379

It generates the right numbers, and it is way faster than the basic code, which won't go very high at all before it starts having issues with recursion depth.
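For comparison, a minimal sketch of what the "basic code" presumably looks like (my rendering of the definition from the Wikipedia page, not the asker's exact code) — it recomputes subproblems heavily without memoization, which is why it struggles at larger n:

```haskell
-- Naive mutually recursive Hofstadter Female/Male sequences,
-- straight from the definition:
--   F(0) = 1,  F(n) = n - M(F(n-1))
--   M(0) = 0,  M(n) = n - F(M(n-1))
fNaive, mNaive :: Int -> Int
fNaive 0 = 1
fNaive n = n - mNaive (fNaive (n - 1))
mNaive 0 = 0
mNaive n = n - fNaive (mNaive (n - 1))
```

The first few terms match A005378 (`1,1,2,2,3,3,4,5,5,6,...`) and A005379 (`0,0,1,2,2,3,4,4,5,6,...`), but without sharing the subcalls it slows down quickly.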

  • Well, you can certainly share all the subdefinitions between m and f. It looks like you can do even better with this pair of sequences, though, and generate them corecursively. Commented Aug 30, 2018 at 2:58
  • Note that !! on lists is inefficient, since it has an O(n) cost. If indexing cannot be avoided, consider using an O(1) data structure, e.g. arrays. Also, recursion depth should not be an issue unless it is really huge -- why do you mention that? (You are not using an ancient Hugs, right?) Commented Aug 30, 2018 at 9:45

2 Answers


First of all, you can pattern-match in a top-level binding. Usually it doesn't mean much interesting is going on, but if you want to share local helpers between two top-level bindings, it can help.

m2 :: Int -> Int
f2 :: Int -> Int
(m2, f2) = (gom, gof)
  where
    gof 0 = 1
    gof i = i - ms !! ( fs !! (i-1) )
    gom 0 = 0
    gom i = i - fs !! ( ms !! (i-1) )
    fs = map gof [0..]
    ms = map gom [0..]

You'll note there's one other trick in there. Instead of bounding the lists fs and ms to their maximum size, I just let laziness handle bounding them. The lists won't be created past where they're needed to memoize earlier results.
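A quick sanity check against the OEIS values (the definitions are repeated here so the snippet compiles on its own):

```haskell
-- Shared-definition version: one top-level pattern binding exports both
-- functions, and the infinite lists fs/ms memoize earlier results lazily.
m2, f2 :: Int -> Int
(m2, f2) = (gom, gof)
  where
    gof 0 = 1
    gof i = i - ms !! (fs !! (i - 1))
    gom 0 = 0
    gom i = i - fs !! (ms !! (i - 1))
    fs = map gof [0 ..]
    ms = map gom [0 ..]

-- map f2 [0..9] == [1,1,2,2,3,3,4,5,5,6]  (A005378)
-- map m2 [0..9] == [0,0,1,2,2,3,4,4,5,6]  (A005379)
```

Because fs and ms are bound in a shared where clause, both top-level functions reuse the same memo lists across calls.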

But list indexing is O(n). Getting rid of even some of it can be a significant speedup. If you look at the pattern of recursion along the same function, you'll see that gom i always calls gom (i-1), and the same with gof. You can use that to remove list indexing on those lookups by just passing along the previous value. Unfortunately, the same doesn't apply with the calls to the opposite function, as they don't follow so easily. But it's still removing a big amount of work. And it can be done in such a way as to utilize laziness even further:

m3, f3 :: Int -> Int
(m3, f3) = ((ms !!), (fs !!))
  where
    (ms, fs) = unzip pairs
    pairs = (0, 1) : zipWith iter [1..] pairs
    iter i (mp, fp) = (i - fs !! mp, i - ms !! fp)

The recursive helper functions have been replaced with simultaneous lazy creation of both result lists. This pattern differs from standard recursion in that it doesn't need a base case to reach, and it requires some sort of guard against trying to immediately find a base case before the complete answer can be provided. This pattern is known as co-recursion. (Or corecursion if I'm typing lazily.) Same idea, but it produces the answer in the opposite direction.
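The classic illustration of the same corecursive pattern, if it helps make the idea concrete, is the Fibonacci stream — the list is defined in terms of its own tail, producing values from the base case outward rather than recursing down toward one:

```haskell
-- Corecursive Fibonacci stream: each element is the sum of the two
-- elements before it, taken from the list being defined.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```

The pairs list above works the same way, except each step of the stream carries a pair of values and still indexes into the growing ms/fs prefixes for the cross-sequence lookups.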




Or you can just use one of the many memoization packages that support mutually recursive functions. Below is an implementation that uses monad-memo, which does require the memoized functions to be defined in monadic form, but is otherwise just a direct translation of your original implementation.

import Control.Monad.Memo
import Control.Monad.ST

-- Same function in monadic form
gof 0 = return 1
gof i = do
  -- gof is memoized on level 0
  fs <- memol0 gof (i-1)
  -- gom is on level 1
  ms <- memol1 gom fs
  return (i - ms)

-- Same here
gom 0 = return 0
gom i = do
  ms <- memol1 gom (i-1)
  fs <- memol0 gof ms
  return (i - fs)

-- Eval monadic form into normal Int -> Int function
fm :: Int -> Int
-- Data.Map-based memoization cache
fm = startEvalMemo . startEvalMemoT . gof

mm :: Int -> Int
mm = startEvalMemo . startEvalMemoT . gom   

-- Or a much faster vector-based memoization cache
fmv :: Int -> Int
-- We use two separate caches: mutable unboxed vectors of `(n+1)` length
fmv n = runST $ (`evalUVectorMemo`(n+1)) . (`evalUVectorMemo`(n+1)) . gof $ n

mmv :: Int -> Int
mmv n = runST $ (`evalUVectorMemo`(n+1)) . (`evalUVectorMemo`(n+1)) . gom $ n

-- This is quite fast in comparison to the original solution
-- but compile it with -O2 to be able to compute `f 1000000`
main :: IO ()
main =
    print ((fm 100000, mm 100000),(fmv 1000000, mmv 1000000))

