Haskell doesn't seem to do automatic memoization. Or do I understand something wrong? No, Haskell doesn't.
However, shared expressions are calculated only once. In the example given by Paul Johnson, x is stored on the heap as a thunk. Both y and z can refer to x since x is in scope, and they refer to the same heap location. Once x has to be evaluated, it will be evaluated only once, and only the result of the evaluation will be kept. So this is not really memoisation, but a consequence of the implementation. Are there other languages which do automatic (i.e.
Implicit, not explicit) memoization? I've seen the @memoized decorator come along in some Python source code. You could of course create your own decorator/implementation for it, complete with LRU and whatever other policies you want to use. What are common ways to implement memoization? There is no single common way to implement memoisation. For fib-like patterns (a single argument which is a number), the technique used in the fib example works well. One could devise a general solution (the hash-map one) and it will work, but it might also be suboptimal for your specific problem. With memoisation you have side effects, so you probably want the cache to live in the State monad.
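For concreteness, the fib-like pattern referred to above is usually the lazy-list trick, where the list itself serves as the memo table (a standard sketch, not code from the question):

```haskell
-- The list 'fibs' doubles as the memo table: each element is a thunk
-- that is forced at most once, so repeated calls to fib share work.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n
```

Indexing with (!!) is O(n); for larger or non-numeric key spaces people switch to trees or maps, which is where the general hash-map-style solution comes in.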
However, in general you want to keep your algorithms as pure as possible, so if the function is recursive you are already getting into a bit of a mess: you will call the memoised version of the function recursively, but that call needs to run in the State monad, so now your whole function has to run in the State monad. This also affects laziness, though you could try the lazy State monad. Keeping this in mind: good automatic memoisation is very difficult to achieve.
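A sketch of that mess, with the cache threaded by hand (hand-threading is exactly what the State monad abstracts; fibMemo is an illustrative name). Note how every recursive call has to pass the table along, which is why the stateful style leaks into the whole function:

```haskell
import qualified Data.Map as Map

-- Each call returns its result together with the updated memo table;
-- the recursion has to thread the table through every call.
fibMemo :: Int -> Map.Map Int Integer -> (Integer, Map.Map Int Integer)
fibMemo n cache
  | n < 2 = (fromIntegral n, cache)
  | otherwise =
      case Map.lookup n cache of
        Just v  -> (v, cache)                    -- cache hit
        Nothing ->
          let (a, c1) = fibMemo (n - 1) cache    -- thread the table...
              (b, c2) = fibMemo (n - 2) c1       -- ...through each call
              v = a + b
          in (v, Map.insert n v c2)
```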
But you can come a long way easily. Automatically memoising functions probably involves program transformation, where writing things in fix-point form could go a long way. Edit: I am mostly interested in the problem I described, i.e. how to implement such a limit. Any references to papers which address this would be very nice. Once you have the basic mechanism of memoisation, you can tweak the lookup and store functions of your memoising table to implement LRU or some other mechanism for keeping memory consumption small. Maybe you can get the idea for LRU from this C++ example.
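To make the "tweak the lookup and store functions" idea concrete, here is a small, purely functional LRU table sketch. All names (LRU, lookupLRU, insertLRU) are illustrative, not a library API, and a real implementation would use a faster recency structure than a linear scan for eviction:

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)
import qualified Data.Map.Strict as Map

-- Each entry carries the "tick" at which it was last used; inserting
-- past capacity evicts the least-recently-used key.
data LRU k v = LRU
  { capacity :: Int
  , clock    :: Int                 -- monotonically increasing tick
  , table    :: Map.Map k (Int, v)  -- key -> (last-used tick, value)
  }

emptyLRU :: Int -> LRU k v
emptyLRU cap = LRU cap 0 Map.empty

-- A hit refreshes the entry's tick so it becomes most recently used.
lookupLRU :: Ord k => k -> LRU k v -> (Maybe v, LRU k v)
lookupLRU k c@(LRU cap t m) =
  case Map.lookup k m of
    Nothing     -> (Nothing, c)
    Just (_, v) -> (Just v, LRU cap (t + 1) (Map.insert k (t, v) m))

-- Store a value, evicting the entry with the oldest tick if needed.
insertLRU :: Ord k => k -> v -> LRU k v -> LRU k v
insertLRU k v (LRU cap t m) = LRU cap (t + 1) (trim (Map.insert k (t, v) m))
  where
    trim m'
      | Map.size m' > cap =
          let (oldest, _) = minimumBy (comparing (fst . snd)) (Map.toList m')
          in Map.delete oldest m'
      | otherwise = m'
```

Plugging these in where a plain Map.lookup/Map.insert pair would go gives a memoising table with bounded memory; other policies (aging, clock) would only change the eviction rule in trim.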
Pure memoisation has no side effects; it's just the progressive conversion of thunks representing arguments or sets of arguments into concrete values. However, Albert is looking for something more like caching, where older values are forgotten. That does require side effects of some sort, even though from the caller's point of view the function is still pure.
– Paul Johnson Apr 22 at 7:04
The LRU C++ example you linked is interesting. This is finally a real solution -- the first solution I read in this thread. It is not optimal though because of the ordered map -- a hash map would fit much better here.
And then you end up exactly at my own provided solution (see the last link in my question). So I still miss some alternative/better solutions which also address some of the other questions (like dynamically changing the limit, etc.). – Albert Apr 22 at 8:04
Btw, "good automatic memoisation is hard" -- why is that? It implies that people have tried it and it didn't perform well. I haven't seen such benchmarks. Can you link to any? – Albert Apr 22 at 11:51
@Paul Johnson: Side effects in the sense that you update memory and add items to the memoisation table. This might also affect your program; maybe it breaks because it hogs too much memory due to memoisation. – Alessandro Vermeulen Apr 22 at 13:32
@Albert, you have a lot of things to consider when you want to do automatic memoisation. Which kinds of functions do you allow to be memoised? Do you memoise parameters that are functions? If you want to memoise using only Haskell, you either need to use a fib-like (safe) pattern or, for more complicated types, hash maps. But then you cannot piggyback on Haskell's thunk evaluation and you need to manage the map yourself. As far as I can see, that requires either lifting everything into the State monad or using unsafePerformIO. – Alessandro Vermeulen Apr 22 at 13:37
I said in a comment that your requirements sounded like garbage collection. I thought this because you are interested in managing a limited pool of memory, purging it once in a while so that it doesn't go over. Now that I think about it, it's more like a virtual memory page replacement algorithm.
You can read that Wikipedia page for the various approaches used to solve that kind of problem, such as "not recently used", "aging", "clock", "second chance", etc. However, memoization isn't often done by restricting the results retained; the mutation required by the above-mentioned algorithms is generally un-Haskellish. Don't let that discourage you, though. You have interesting ideas that could be valuable additions to the exploration of memoization possibilities in Haskell performed thus far.
Sometimes a particular memoization problem lends itself well to limited memory. For example, aligning two gene sequences can be done with Dynamic Programming (see Wikipedia's Dynamic Programming # Sequence alignment) with a 2-dimensional memoization table. But since the DP solution for a given cell only depends on the results from the preceding row, you can start from the bottom and discard rows that are more than 1 away from the current row.
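To make the row-discarding idea concrete, here is a sketch using Levenshtein edit distance as a stand-in for an alignment score; only the previous row of the DP table is ever kept alive:

```haskell
-- Rolling-row dynamic programming: foldl carries just one row of the
-- DP table, so earlier rows can be garbage-collected.
levenshtein :: String -> String -> Int
levenshtein xs ys = last (foldl step [0 .. length ys] xs)
  where
    -- Compute the next row from the previous one only.
    step prev x = scanl compute (head prev + 1) (zip3 prev (tail prev) ys)
      where
        compute left (diag, up, y) =
          minimum [ left + 1                          -- insertion
                  , up + 1                            -- deletion
                  , diag + (if x == y then 0 else 1)  -- (mis)match
                  ]
```

A full alignment (with traceback) needs more bookkeeping, but the score itself fits in O(length of one row) of memory this way.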
Fibonacci numbers are the same: you only need the previous two numbers in the sequence in order to compute the next. You can discard anything earlier if all you are interested in is the nth number. Most memoizations are for the sake of speeding up recursive algorithms where there are shared subproblems.
Many such problems do not have an easy way of sequencing the evaluations in order to throw away the results you no longer need. At that point, you're just guessing, using heuristics (like frequency of use) to determine who gets how much access to the limited resource.
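The Fibonacci observation above is the simplest instance of such sequencing: the entire "memo" is just the previous two values.

```haskell
-- Iterative fib: at each step only the last two numbers survive,
-- so memory use is constant regardless of n.
fib :: Int -> Integer
fib n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)
```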
The link to the page replacement algorithms is interesting. Much of it can indeed be applied to memoization, though memoization is still a bit more special.
– Albert Apr 22 at 8:13.
No, Haskell does not do automatic memoisation of functions. What it does is store values, so if you have

    x = somethingVeryLong

and somewhere else in the same scope you have

    y = f x
    z = g x

then x will only be computed once. This package shows how memoised values can be stored using a variety of keys and look-up tables.
The memoising is typically used within a single invocation of a larger function, so that the memoised values don't hang around forever (which, as you say, would be a problem). If you want a memoiser that also forgets old values using LRU or some similar policy, then I think you probably need to put it in a state monad; you can't get Haskell to behave like that using the conventional memoising approach.
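For contrast, here is a sketch of that conventional memoising approach in fix-point style (memoFix and fibOpen are illustrative names, not this package's API). The lazy-list table only ever grows and offers no hook for evicting entries, which is why LRU-style forgetting doesn't fit it:

```haskell
-- Open recursion: the function calls an abstract 'self' instead of itself.
fibOpen :: (Int -> Integer) -> Int -> Integer
fibOpen _    0 = 0
fibOpen _    1 = 1
fibOpen self n = self (n - 1) + self (n - 2)

-- Tie the knot through a lazy list acting as the memo table
-- (works for non-negative Int arguments only; the table never shrinks).
memoFix :: ((Int -> Integer) -> Int -> Integer) -> Int -> Integer
memoFix f = memo
  where
    memo  = (table !!)
    table = map (f memo) [0 ..]

fib :: Int -> Integer
fib = memoFix fibOpen
```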
It seems like the package you linked also ignores the issue of a memory limit. Do you know of any code (no matter what language) which doesn't ignore this? – Albert Apr 21 at 21:18.
Not exactly an answer, but this page: haskell.org/haskellwiki/Memoization offers ideas about memoization in Haskell and also shows a list-based implementation of the Fibonacci sequence that may be of interest.
I already linked exactly that page in my question. – Albert Apr 21 at 20:45.
For an example of implementing automatic memoization, you may look at the Factor programming language and its Memoization vocabulary. For example, a simple Fibonacci number generator:

    : fib ( n -- n ) dup 1 > [ [ 1 - fib ] [ 2 - fib ] bi + ] when ;

may be memoized by replacing the ":" word with "MEMO:":

    MEMO: fib ( n -- n ) dup 1 > [ [ 1 - fib ] [ 2 - fib ] bi + ] when ;

In that case, fib's inputs and corresponding outputs will be transparently stored in an in-memory dictionary. Factor syntax may be confusing :). I recommend you watch the video presentation from Google Tech Talks about Factor to understand how it is possible to implement memoization in that way.