Stratus3D

A blog on software engineering by Trevor Brown

Fibonacci Algorithms in Elixir - Part 2

Another look at algorithm time complexity

In my last post I wrote about various Fibonacci implementations in Elixir. I timed each implementation generating a list of the first N Fibonacci numbers and compared the performance characteristics of each implementation. Based on unwind's response on Lobsters, I decided to revisit Fibonacci implementations in Elixir. I'm going to update the implementations I used in my last blog post so they use Erlang processes to store previously computed Fibonacci numbers. I'll then benchmark each implementation against its original version to see how much the processes sped things up.

Rewriting the Algorithms

I needed a process for each algorithm to store the computed Fibonacci numbers. I chose to create a single generic GenServer that I could spawn once for each algorithm I was testing. Below is the implementation I settled on. It's fairly straightforward. The only special thing about this GenServer code is that it allows the caller to specify the server they want to use. This is necessary because I will have multiple processes running different instances of this GenServer, one for each implementation I am testing. Each algorithm needs its own FibStore server so that Fibonacci numbers are stored in and fetched from the correct cache.

defmodule FibStore do
  use GenServer

  # Return the cached value for `number`, invoking `fib_fun`
  # to compute and cache it on a miss.
  def do_fib(name, number, fib_fun) do
    case get(name, number) do
      nil ->
        result = fib_fun.(number)
        put(name, number, result)
        result
      result ->
        result
    end
  end

  defp get(name, number) do
    GenServer.call(name, {:get, number})
  end

  defp put(name, number, value) do
    GenServer.call(name, {:put, number, value})
  end

  # Start a named FibStore instance, treating an already
  # running server as success.
  def maybe_start(name) do
    case GenServer.start_link(__MODULE__, [], [name: name]) do
      {:ok, _pid} ->
        :ok
      {:error, {:already_started, _pid}} ->
        :ok
    end
  end

  # GenServer callbacks
  def init(_) do
    {:ok, %{}}
  end

  def handle_call({:get, number}, _from, state) do
    {:reply, Map.get(state, number), state}
  end

  def handle_call({:put, number, value}, _from, state) do
    {:reply, :ok, Map.put(state, number, value)}
  end
end
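
To see the caching behavior in action, here's a quick sketch (the :demo name and the anonymous functions are hypothetical, just for illustration). The second do_fib call finds the stored value and never invokes the function passed to it:

FibStore.maybe_start(:demo)
#=> :ok
FibStore.do_fib(:demo, 10, fn _ -> 55 end)
#=> 55
FibStore.do_fib(:demo, 10, fn _ -> raise "never called, value is cached" end)
#=> 55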

Next I needed to update the existing functions to use this new GenServer for fetching and storing values. The previous implementations had a fib function that performed the actual computation. To avoid modifying a lot of code I added a new function named do_fib that only calls the existing fib function to do the computation if the Fibonacci number has not already been computed and stored in the GenServer instance. Below are the three new implementations:

My Implementation

In my updated implementation, the fib clause that generates any number after the first two in the Fibonacci sequence invokes FibStore.do_fib/3. This ensures it will reuse an already computed number if there is one. Otherwise FibStore.do_fib/3 will invoke fib/1 to compute and store the number.

defmodule MyFib do
  def fibonacci(number) do
    FibStore.maybe_start(__MODULE__)
    Enum.reverse(FibStore.do_fib(__MODULE__, number, &fib/1))
  end

  def fib(0), do: [0]
  def fib(1), do: [1|fib(0)]
  def fib(number) when number > 1 do
    [x, y|_] = all = FibStore.do_fib(__MODULE__, number-1, &fib/1)
    [x + y|all]
  end
end
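
Calling the updated module works as before. Note that this version treats its argument as a zero-based position, so fibonacci(10) returns eleven numbers:

MyFib.fibonacci(10)
#=> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]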

Rosetta Code

In much the same way I changed the Rosetta Code algorithm. It invokes FibStore.do_fib/3 when computing Fibonacci numbers, so it uses pre-computed numbers if they exist in the FibStore process. Otherwise it invokes fib/1 to compute the Fibonacci number. Note that unlike my implementation, this one must recompute everything when generating a new number in the sequence. For example, when generating the fifth number it would not be able to reuse the 2 and the 3 cached in the FibStore process. I could not make it reuse pre-computed numbers without changing the way the fib/3 function works.

defmodule RosettaCodeFib do
  def fibonacci(number) do
    FibStore.maybe_start(__MODULE__)
    Enum.map(0..number, fn(n) -> FibStore.do_fib(__MODULE__, n, &fib/1) end)
  end

  def fib(0), do: 0
  def fib(1), do: 1
  def fib(n), do: fib(0, 1, n-2)

  def fib(_, prv, -1), do: prv
  def fib(prvprv, prv, n) do
    next = prv + prvprv
    fib(prv, next, n-1)
  end
end

Dave Thomas'

This implementation benefited the most from the FibStore caching of pre-computed numbers. It remains similar to the original implementation, but now it invokes FibStore.do_fib/3 when computing numbers. Due to the recursive nature of the fib/1 function, a call is made to FibStore.do_fib/3 for every number that must be computed. That means this implementation is able to use pre-computed numbers as a starting point when computing new numbers, unlike the Rosetta Code implementation.

defmodule ThomasFib do
  def fibonacci(number) do
    FibStore.maybe_start(__MODULE__)
    Enum.map(0..number, fn(n) -> do_fib(n, &fib/1) end)
  end

  def fib(0), do: 0
  def fib(1), do: 1
  def fib(n), do: do_fib(n-1, &fib/1) + do_fib(n-2, &fib/1)

  defp do_fib(number, fun) do
    FibStore.do_fib(__MODULE__, number, fun)
  end
end

Which function generates a list of Fibonacci numbers the fastest?

My bet was that my implementation would still perform better than the others, but I wasn't sure which of the others would be the fastest. The Rosetta Code algorithm wasn't able to leverage the FibStore caching as much as the others, but then again, it was already a fairly fast algorithm.

Benchmarking

I reused my benchmarking code from the first blog post; see the source file for the details.
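
A harness for measurements like these can be as simple as timing each fibonacci/1 call with :timer.tc/3. Here's a minimal sketch of the approach (the Bench module is my own illustration, not the actual fib.ex code):

defmodule Bench do
  # Time module.fibonacci/1 for each list size and print the
  # elapsed time in microseconds.
  def run(module, sizes \\ [3, 5, 10, 20, 30, 45]) do
    Enum.each(sizes, fn n ->
      {micros, _list} = :timer.tc(module, :fibonacci, [n])
      IO.puts("#{inspect(module)} fibonacci(#{n}): #{micros}µs")
    end)
  end
end

Bench.run(MyFib)
Bench.run(RosettaCodeFib)
Bench.run(ThomasFib)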

Since the new algorithms now maintain state, their performance on the first run will differ from their performance on all subsequent runs. I decided I would run the benchmarks once to capture the performance of the initial runs, then run the benchmarks four more times to measure the performance of subsequent runs, just as I did in my first blog post.

Results

Initial Run

I ran the benchmark once to capture the performance when no state had been cached yet. Time is in microseconds.

Number    Rosetta Code    Dave Thomas'    Mine
3         36              83              30
5         22              56              16
10        76              69              31
20        87              111             48
30        144             133             56
45        143             205             87

Four Subsequent Runs

After the first run I ran the benchmark again against each algorithm four times. The average run times are shown in the table below in microseconds.

Number    Rosetta Code    Dave Thomas'    Mine
3         13              22              5
5         17              15              3
10        21              25              5
20        55              40              4
30        64              65              3
45        78              94              4

For comparison here are the average run times of the original algorithms:

List Size    Rosetta Code    Dave Thomas'    Mine
3            4               543             1
5            2               3               0
10           8               11              0
20           5               880             4
30           8               97500           1
45           17              131900822       2

Conclusion

Looking at the data in these tables, it's clear the Dave Thomas algorithm benefited the most from the new process caching of computed numbers. This isn't surprising given the rapidly growing number of recursive calls the original algorithm made when computing large numbers. It's clear from the run times that the time complexity of the original algorithm was exponential. With the process caching in place the time complexity is no longer exponential. I didn't take the time to work it out, but I would guess the Dave Thomas algorithm with process caching now has linear time complexity.

The Rosetta Code algorithm doesn't really benefit from the new process caching. The performance characteristics remain the same, and it runs nearly 5 times slower. My algorithm did not benefit from the process caching either; it ran about 4 times slower than the original. Even though processes are cheap on the Erlang VM and local messages are fast, it's clear the overhead of caching the data in a separate process is taking a toll on these algorithms. Sending a message is cheap, but the time it takes for the process to be scheduled after receiving a message adds up. When no data is cached these algorithms must send and receive at least two messages (a get and a put) for every recursion needed to compute the final number. Even when the number has already been computed, two messages (a call and its reply) are needed to fetch it. The two original algorithms performed well by keeping pre-computed numbers in process memory and reusing them when necessary, so this caching only reduces the arithmetic operations needed on subsequent calls. And even on subsequent calls the overhead of sending and receiving messages is greater than the cost of computing the list of Fibonacci numbers all over again, at least for the first 45 numbers in the sequence.

When it comes to generating a list of the first N Fibonacci numbers, my original algorithm still seems to be the fastest of all the algorithms I've tested. My algorithm was designed from the ground up specifically for generating a list of the first N Fibonacci numbers in the sequence, so I think the takeaway here is that code written for a specific task may outperform code that is more general purpose.

Resources

Fibonacci Algorithms in Elixir

A look at algorithm time complexity

I was recently asked to write a function in Elixir to generate a list of Fibonacci numbers. I had forgotten the typical implementation of the algorithm for generating Fibonacci numbers, but since my function was returning a list I knew I could just add the two previous numbers in the list to compute the next number. I quickly threw together a simple 8-line function that took a number and returned a list with that many of the first Fibonacci numbers in it. After writing it and testing it out I realized my function was somewhat different from the implementations I had seen. My code looked like this:

defmodule Fibonacci do
  def fibonacci(number) do
  Enum.reverse(fibonacci_do(number))
  end

  def fibonacci_do(1), do: [0]
  def fibonacci_do(2), do: [1|fibonacci_do(1)]
  def fibonacci_do(number) when number > 2 do
    [x, y|_] = all = fibonacci_do(number-1)
    [x + y|all]
  end
end

With my implementation we have a function called fibonacci_do/1 with three clauses; the first two are for the first and second Fibonacci numbers, and the third generates all the rest of the numbers in the sequence by adding the previous two numbers in the list and returning a list with the new number added. The function actually generates the list with the numbers in reverse order, so I defined the fibonacci/1 function to reverse the list. This function can generate the first 100,000 Fibonacci numbers in less than a second. Not too bad, I thought.
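
For example, asking for the first ten numbers:

Fibonacci.fibonacci(10)
#=> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]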

Common Fibonacci in Elixir

After doing this I went and looked at the other Elixir implementations. Here are two of them:

Rosetta Code algorithm:

defmodule Fibonacci do
    def fib(0), do: 0
    def fib(1), do: 1
    def fib(n), do: fib(0, 1, n-2)

    def fib(_, prv, -1), do: prv
    def fib(prvprv, prv, n) do
        next = prv + prvprv
        fib(prv, next, n-1)
    end
end

IO.inspect Enum.map(0..10, fn i -> Fibonacci.fib(i) end)

From a gist by Dave Thomas, most of the Elixir implementations followed this pattern:

def fib(0), do: 0
def fib(1), do: 1
def fib(n), do: fib(n-1) + fib(n-2)

Now this variation is by far the most common. I've seen this implementation in several slide decks at Elixir conferences, and I've seen it used as example code many times, so I'm unsure of its exact origin. Dave Thomas presented this implementation at the first ElixirConf because it mirrored the mathematical formula for generating Fibonacci numbers, so in this blog post I'll call it the Dave Thomas implementation. If you were to naively use this function to generate a list of Fibonacci numbers, you'd most likely do something just like the Enum.map/2 call in the Rosetta Code example above:

IO.inspect Enum.map(0..10, fn i -> fib(i) end)

Which function generates a list of Fibonacci numbers the fastest?

Looking at these three algorithms, it's clear they have some similarities. All of them are recursive functions that start with base cases for the first two Fibonacci numbers, 0 and 1. All of them take a single argument: the number in the Fibonacci sequence to generate.

But these algorithms have some key differences as well.

My function calls itself recursively and builds up a list of numbers in the sequence and returns the list directly.

The Rosetta Code function also calls itself recursively, though only once per step, until it has generated a single number. Since it only generates a single number it must be executed multiple times to generate a list of numbers in the sequence.

The Dave Thomas algorithm differs from the other two in that it recursively calls itself not once but twice. Just like the Rosetta Code algorithm it must be executed multiple times to generate a list of numbers in the sequence. At only 4 lines it's by far the most succinct of the three.
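
That double recursion is what makes the Dave Thomas implementation expensive: fib(n) recomputes fib(n-1) and fib(n-2) from scratch, so the total number of calls grows roughly like the Fibonacci numbers themselves. A small helper makes this concrete (FibCalls is my own illustration, not part of any implementation above):

defmodule FibCalls do
  # Count how many fib/1 invocations the Dave Thomas
  # implementation makes to compute fib(n): one for fib(n)
  # itself plus the calls made by its two recursive subcalls.
  def calls(0), do: 1
  def calls(1), do: 1
  def calls(n), do: calls(n - 1) + calls(n - 2) + 1
end

FibCalls.calls(10)
#=> 177
FibCalls.calls(30)
#=> 2692537

Nearly 2.7 million calls to compute the 30th number alone, which helps explain the benchmark results below.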

Benchmarking

I opted to do simple benchmarking of these three algorithms. The benchmark script feeds the function a number N and expects the function to return a list containing the first N numbers of the Fibonacci sequence. Only my algorithm returned a list directly, so for the other two algorithms I created a function that maps over the range and repeatedly calls the fib function:

def fibonacci(number) do
  Enum.map(0..number, fn(n) -> fib(n) end)
end

Results

I ran the benchmark against each algorithm four times. The average run times are shown in the table below in microseconds.

List Size    Rosetta Code    Dave Thomas'    Mine
3            4               543             1
5            2               3               0
10           8               11              0
20           5               880             4
30           8               97500           1
45           17              131900822       2

I'm not sure why the Dave Thomas algorithm was so slow at computing a list of the first 3 Fibonacci numbers. My guess is that the CPU core was busy with something else when that number was benchmarked, which skewed the results.

As you can see, these three algorithms perform very differently as the length of the list they must generate grows. Up to around 10 items the Rosetta Code algorithm and the Dave Thomas algorithm perform about the same. After 10 items the run time for the Dave Thomas algorithm climbs quickly: for a list of 30 it takes nearly 1/10 of a second, and for a list of 45 it takes over two minutes. The Rosetta Code algorithm performs much better, taking only around 17 microseconds to compute a list of the first 45 Fibonacci numbers. My algorithm appears to have similar performance characteristics to the Rosetta Code algorithm, but with times that averaged less than a quarter of the Rosetta Code algorithm's run time. Note that the timing code rounded to the nearest microsecond, so some computations took less than half a microsecond for my algorithm.

I decided to do a little more benchmarking of my algorithm and the Rosetta code algorithm. Since both algorithms seemed pretty fast I tried using them to generate much larger lists of Fibonacci numbers. The results are shown in the table below. Again, time is in microseconds.

List Size    Rosetta Code                    Mine
100          76                              5
500          2892                            36
1000         15058                           78
5000         631680                          870
10000        4152261                         4767
100000       > 5 minutes (never returned)    1195938

Clearly my algorithm performs better, as it doesn't have to generate each number from scratch each time it computes a new number in the resulting list. Since the Rosetta Code function does work proportional to i to produce the i-th number, generating the whole list costs roughly 0 + 1 + ... + N additions, about N²/2, so the Rosetta Code algorithm's run time grows quadratically as the list it must generate grows in size (and even faster in practice, because the additions themselves get more expensive as the numbers get huge). For generating a list of Fibonacci numbers it's clear my algorithm performs the best.

Conclusion

Clearly these three algorithms have very different performance characteristics. For generating a single Fibonacci number the Rosetta Code function will work fine. My function will also work fine for generating a single Fibonacci number, but will use more memory due to the list that it builds. The Dave Thomas algorithm performs poorly for anything beyond the first 30 Fibonacci numbers and probably shouldn’t be used for anything other than exercises like this.

My algorithm, which was designed to generate a list of Fibonacci numbers, turns out to be the best algorithm for generating a list of Fibonacci numbers. Looping over the Fibonacci functions for the other algorithms greatly degrades their performance. It’s better to design a new Fibonacci function that generates the full list in one recursive call rather than reusing an existing Fibonacci function that only generates one number at a time to build the list.

Resources

  • fib.ex
  • https://rosettacode.org/wiki/Fibonacci_sequence#Elixir
  • https://gist.github.com/pragdave/f8c7684b69d235269139bad0a2419273
  • https://www.stridenyc.com/blog/tail-call-optimization-with-fibonacci-in-elixir
  • https://elixirforum.com/t/pragdave-fibonacci-solution/11174/8
  • https://www.nayuki.io/page/fast-fibonacci-algorithms

Maintaining an Open Source Project

This was a talk I gave at the Sarasota Software Engineers User Group in Sarasota on July 26, 2018

Maintaining open source software is often time consuming and difficult. Open source maintainers have to deal with reports of hard-to-reproduce bugs, incomplete and buggy pull requests, requests for support, and bug reports mistakenly opened by confused users. Despite the challenges maintainers face, they can still greatly benefit from the support of their software's users and contributors. Through proper organization and automation maintainers can better manage their projects and have more time to focus on the long term goals of the software. In this talk I discuss my experience working as a maintainer of asdf, an open source version management tool: how I got started as a contributor, what I learned throughout my work, and the techniques I have picked up that make open source maintenance easier. You will learn how to apply these techniques to your own open source (and closed source) projects to improve efficiency and speed of development. Geared for those who have some experience with software development and want to begin contributing to open source projects or get better at maintaining existing software.

My Experience as a Maintainer of asdf

In the talk I spoke a lot about my experience with asdf and how I got started in open source. I explained how asdf works, talked about my work as a maintainer, and presented some of the tools that have helped us build asdf. All of these things can be found in other places: I previously wrote about how asdf works, and the tools we use for asdf can all be seen at work in the project's build. In this post I'm only going to list the advice I gave at the end of the talk. First is advice for users and contributors.

Advice for users and contributors

  • Fastest way to get something done is to do it yourself
  • Be extremely detailed when reporting bugs and proposing features
  • PRs and the issue tracker are your friends
    • Look and see if the issue or patch you want to contribute already exists
    • Look at past PRs and comments to determine if your changes would be welcome
  • Don’t get discouraged by lack of attention
  • Don’t assume maintainers know more than you
  • Have a backup plan

Advice for maintainers

Automate, automate, automate!

  • Automate all repetitive tasks that can be automated
  • Linting, tests, builds, releases/tags, deployments
  • Try to codify all requirements in something that’s automated

Be nice, but also be strict

  • Be thankful for all contributions
  • Be open to different solutions
  • But don’t merge PRs that:
    • Are confusing
    • Are for bugs that are not understood or documented
    • Violate project standards
    • Negatively affect the existing code in the codebase
  • Don’t give someone commit access until they’ve proven themselves with their contributions

Encourage contribution

  • Make it easy to contribute
  • Have well defined standards for contributions that users can read
  • Point people in the right direction
  • Offer contributors help when it makes sense

Be organized

  • Repositories, files, and directories should have descriptive names
  • All code should have corresponding unit tests
  • Everything should be under version control (GitHub makes this easy)
  • Software should be versioned
    • Tagging should be automated
    • Users should be encouraged to use the latest stable version when installing
    • Users should have an easy way to view version and other environmental information to make bug reporting easier

Have good docs

  • Everything should be documented
    • Usage, APIs, bug reporting, contribution guidelines, standards, review and release processes, and on and on…
  • Have official documentation that goes through a review process
  • Have a wiki so users can easily share unofficial documentation on specific use cases of the software

Resources

  • BATS - https://github.com/sstephenson/bats
  • ShellCheck - https://www.shellcheck.net/
  • Travis CI - https://travis-ci.org/
  • asdf stats - https://porter.io/github.com/asdf-vm/asdf
  • My original question on Hacker News - https://news.ycombinator.com/item?id=10254157
  • https://www.youtube.com/watch?v=dIageYT0Vgg
  • https://www.youtube.com/watch?v=q3ie1duhpCg