:mod:`cachetools` --- Extensible memoizing collections and decorators
======================================================================
For the purpose of this module, a *cache* is a mutable_ mapping_ of a
fixed maximum size.  When the cache is full, i.e. when adding another
item would exceed its maximum size, the cache must choose which
item(s) to discard based on a suitable `cache algorithm`_.  In
general, a cache's size is the total size of its items, and an item's
size is a property or function of its value, e.g. the result of
``sys.getsizeof(value)``.  In the trivial case in which each
item counts as :const:`1`, a cache's size is equal to the number of
its items, or ``len(cache)``.
Multiple cache classes based on different caching algorithms are
provided, along with decorators for easily memoizing function and
method calls.
Cache implementations
---------------------
This module provides several classes implementing caches using
different cache algorithms.  All these classes derive from class
:class:`Cache`, which in turn derives from
:class:`collections.abc.MutableMapping`, and provide :attr:`maxsize`
and :attr:`currsize` properties to retrieve the maximum and current
size of the cache.  When a cache is full, :meth:`Cache.__setitem__()`
calls :meth:`self.popitem()` repeatedly until there is enough room for
the item to be added.
:class:`Cache` also features a :meth:`getsizeof` method, which returns
the size of a given `value`.  The default implementation of
:meth:`getsizeof` returns :const:`1` irrespective of its argument,
making the cache's size equal to the number of its items, or
``len(cache)``.  For convenience, all cache classes accept an optional
named constructor parameter `getsizeof`, which may specify a function
of one argument used to retrieve the size of an item's value.
Note that the values of a :class:`Cache` are mutable by default, as
are e.g. the values of a :class:`dict`.  It is the user's
responsibility to take care that cached values are not accidentally
modified.  This is especially important when using a custom
`getsizeof` function, since an item's size will only be
computed when the item is inserted into the cache.
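As a quick sketch of how a custom `getsizeof` affects sizing (using
:class:`LRUCache` here, but any of the cache classes work the same
way):

```python
from cachetools import LRUCache

# Size each item by the length of its value instead of counting it as 1.
cache = LRUCache(maxsize=10, getsizeof=len)

cache["a"] = "spam"  # contributes 4 to currsize
cache["b"] = "egg"   # contributes 3 to currsize

print(cache.currsize)  # 7
print(cache.maxsize)   # 10
```

With the default `getsizeof`, the same two insertions would give a
``currsize`` of 2.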
Please be aware that all these classes are *not* thread-safe.
Access to a shared cache from multiple threads must be properly
synchronized, e.g. by using one of the memoizing decorators with a
suitable `lock` object.
.. autoclass:: Cache(maxsize, getsizeof=None)
   This class discards arbitrary items using :meth:`popitem` to make
   space when necessary.  Derived classes may override :meth:`popitem`
   to implement specific caching strategies.  If a subclass has to
   keep track of item access, insertion or deletion, it may
   additionally need to override :meth:`__getitem__`,
   :meth:`__setitem__` and :meth:`__delitem__`.
.. autoclass:: FIFOCache(maxsize, getsizeof=None)

   This class evicts items in the order they were added to make space
   when necessary.
.. autoclass:: LFUCache(maxsize, getsizeof=None)

   This class counts how often an item is retrieved, and discards the
   items used least often to make space when necessary.
.. autoclass:: LRUCache(maxsize, getsizeof=None)

   This class discards the least recently used items first to make
   space when necessary.
.. autoclass:: MRUCache(maxsize, getsizeof=None)

   This class discards the most recently used items first to make
   space when necessary.
.. autoclass:: RRCache(maxsize, choice=random.choice, getsizeof=None)

   This class randomly selects candidate items and discards them to
   make space when necessary.

   By default, items are selected from the list of cache keys using
   :func:`random.choice`.  The optional argument `choice` may specify
   an alternative function that returns an arbitrary element from a
   non-empty sequence.
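For instance, passing a deterministic `choice` function makes the
eviction order predictable, which can be handy in tests (a sketch):

```python
from cachetools import RRCache

# Use min() as the choice function, so the smallest key is always evicted.
cache = RRCache(maxsize=2, choice=min)

cache[1] = "one"
cache[2] = "two"
cache[3] = "three"  # cache is full, so choice() picks key 1 for eviction

print(sorted(cache))  # [2, 3]
```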
.. autoclass:: TTLCache(maxsize, ttl, timer=time.monotonic, getsizeof=None)

   This class associates a time-to-live value with each item.  Items
   that expire because they have exceeded their time-to-live will be
   removed automatically.  If no expired items are there to remove,
   the least recently used items will be discarded first to make space
   when necessary.
   By default, the time-to-live is specified in seconds and
   :func:`time.monotonic` is used to retrieve the current time.  A
   custom `timer` function can also be supplied::

      from datetime import datetime, timedelta

      cache = TTLCache(maxsize=10, ttl=timedelta(hours=12), timer=datetime.now)
   The expression `timer() + ttl` at the time of insertion defines the
   expiration time of a cache item, and must be comparable against
   later results of `timer()`.
   Expired items will be removed from a cache only at the next
   mutating operation, and therefore may still claim memory.

   .. method:: expire(self, time=None)

      Calling this method removes all items whose time-to-live would
      have expired by `time`, so garbage collection is free to reuse
      their memory.  If `time` is :const:`None`, this removes all
      items that have expired by the current value returned by
      `timer`.
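A sketch of the expiry behaviour, using a hypothetical manually
advanced timer so no real waiting is needed:

```python
from cachetools import TTLCache

clock = [0]  # hypothetical fake clock that we advance by hand

cache = TTLCache(maxsize=10, ttl=5, timer=lambda: clock[0])
cache["spam"] = 42

print(cache.get("spam"))  # 42 -- still within its time-to-live

clock[0] = 6              # advance past the 5-unit ttl
print(cache.get("spam"))  # None -- the item has expired
```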
Extending cache classes
-----------------------
Sometimes it may be desirable to notice when and what cache items are
evicted, i.e. removed from a cache to make room for new items.  Since
all cache implementations call :meth:`popitem` to evict items from the
cache, this can be achieved by overriding this method in a subclass:
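A minimal sketch of such a subclass, which simply records each evicted
key/value pair (the ``evicted`` attribute is an illustrative addition,
not part of the :class:`Cache` API):

```python
from cachetools import LRUCache

class TrackingLRUCache(LRUCache):
    """LRU cache that remembers which items were evicted."""

    def __init__(self, maxsize, getsizeof=None):
        super().__init__(maxsize, getsizeof=getsizeof)
        self.evicted = []

    def popitem(self):
        key, value = super().popitem()
        self.evicted.append((key, value))  # record the discarded item
        return key, value

cache = TrackingLRUCache(maxsize=2)
cache["a"], cache["b"], cache["c"] = 1, 2, 3  # inserting "c" evicts "a"

print(cache.evicted)  # [('a', 1)]
```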
Similar to the standard library's :class:`collections.defaultdict`,
subclasses of :class:`Cache` may implement a :meth:`__missing__`
method which is called by :meth:`Cache.__getitem__` if the requested
key is not found, for example to compute, store and return a missing
value:

>>> import urllib.request
>>> class PepStore(LRUCache):
...     def __missing__(self, key):
...         """Retrieve text of a Python Enhancement Proposal"""
...         url = 'http://www.python.org/dev/peps/pep-%04d/' % key
...         with urllib.request.urlopen(url) as s:
...             pep = s.read()
...         self[key] = pep  # store text in cache
...         return pep
Note, though, that such a class does not really behave like a *cache*
any more, and will lead to surprising results when used with any of
the memoizing decorators described below.
Memoizing decorators
--------------------

The :mod:`cachetools` module provides decorators for memoizing
function and method calls.  This can save time when a function is
often called with the same arguments:

>>> @cached(cache={})
... def fib(n):
...     return n if n < 2 else fib(n - 1) + fib(n - 2)
>>> fib(42)
267914296
.. decorator:: cached(cache, key=cachetools.keys.hashkey, lock=None)
   Decorator to wrap a function with a memoizing callable that saves
   results in a cache.

   The `cache` argument specifies a cache object to store previous
   function arguments and return values.  Note that `cache` need not
   be an instance of the cache implementations provided by the
   :mod:`cachetools` module; :func:`cached` will work with any mutable
   mapping type, including plain :class:`dict`.

   `key` specifies a function that will be called with the same
   positional and keyword arguments as the wrapped function itself,
   and which has to return a suitable cache key.  Since caches are
   mappings, the object returned by `key` must be hashable.  The
   default is to call :func:`cachetools.keys.hashkey`.
   If `lock` is not :const:`None`, it must specify an object
   implementing the `context manager`_ protocol.  Any access to the
   cache will then be nested in a ``with lock:`` statement.  This can
   be used for synchronizing thread access to the cache by providing a
   :class:`threading.Lock` instance, for example.
   .. note::

      The `lock` context manager is used only to guard access to the
      cache object.  The underlying wrapped function will be called
      outside the `with` statement, and must be thread-safe by itself.
   The original underlying function is accessible through the
   :attr:`__wrapped__` attribute of the memoizing wrapper function.
   This can be used for introspection or for bypassing the cache.
   To perform operations on the cache object, for example to clear the
   cache during runtime, the cache should be assigned to a variable.
   When a `lock` object is used, any access to the cache from outside
   the wrapped function should also be performed within an appropriate
   `with lock:` statement::

      import urllib.request
      from threading import Lock

      from cachetools import LRUCache, cached
      from cachetools.keys import hashkey

      cache = LRUCache(maxsize=32)
      lock = Lock()

      @cached(cache, key=hashkey, lock=lock)
      def get_pep(num):
          'Retrieve text of a Python Enhancement Proposal'
          url = 'http://www.python.org/dev/peps/pep-%04d/' % num
          with urllib.request.urlopen(url) as s:
              return s.read()

      # make sure access to cache is synchronized
      with lock:
          cache.clear()

      # always use the key function for accessing cache items
      with lock:
          cache.pop(hashkey(42), None)
   It is also possible to use a single shared cache object with
   multiple functions, as long as different cache keys are generated
   for each function, even for identical function arguments:

   >>> from functools import partial
   >>> from cachetools.keys import hashkey

   >>> # shared cache for integer sequences
   >>> numcache = {}

   >>> # compute Fibonacci numbers
   >>> @cached(numcache, key=partial(hashkey, 'fib'))
   ... def fib(n):
   ...     return n if n < 2 else fib(n - 1) + fib(n - 2)

   >>> # compute Lucas numbers
   >>> @cached(numcache, key=partial(hashkey, 'luc'))
   ... def luc(n):
   ...     return 2 - n if n < 2 else luc(n - 1) + luc(n - 2)
.. decorator:: cachedmethod(cache, key=cachetools.keys.hashkey, lock=None)
   Decorator to wrap a class or instance method with a memoizing
   callable that saves results in a (possibly shared) cache.

   The main difference between this and the :func:`cached` function
   decorator is that `cache` and `lock` are not passed objects, but
   functions.  Both will be called with :const:`self` (or :const:`cls`
   for class methods) as their sole argument to retrieve the cache or
   lock object for the method's respective instance or class.
   .. note::

      As with :func:`cached`, the context manager obtained by calling
      ``lock(self)`` will only guard access to the cache itself.  It
      is the user's responsibility to handle concurrent calls to the
      underlying wrapped method in a multithreaded environment.
   One advantage of :func:`cachedmethod` over the :func:`cached`
   function decorator is that cache properties such as `maxsize` can
   be set at runtime::

      import operator
      import urllib.request

      from cachetools import LRUCache, cachedmethod

      class CachedPEPs(object):

          def __init__(self, cachesize):
              self.cache = LRUCache(maxsize=cachesize)

          @cachedmethod(operator.attrgetter('cache'))
          def get(self, num):
              """Retrieve text of a Python Enhancement Proposal"""
              url = 'http://www.python.org/dev/peps/pep-%04d/' % num
              with urllib.request.urlopen(url) as s:
                  return s.read()

      peps = CachedPEPs(cachesize=10)
      print("PEP #1: %s" % peps.get(1))
   When using a shared cache for multiple methods, be aware that
   different cache keys must be created for each method even when
   method arguments are the same, just as with the :func:`cached`
   decorator::

      import urllib.request
      from functools import partial

      from cachetools import LRUCache, cachedmethod
      from cachetools.keys import hashkey

      class CachedReferences(object):

          def __init__(self, cachesize):
              self.cache = LRUCache(maxsize=cachesize)

          @cachedmethod(lambda self: self.cache, key=partial(hashkey, 'pep'))
          def get_pep(self, num):
              """Retrieve text of a Python Enhancement Proposal"""
              url = 'http://www.python.org/dev/peps/pep-%04d/' % num
              with urllib.request.urlopen(url) as s:
                  return s.read()

          @cachedmethod(lambda self: self.cache, key=partial(hashkey, 'rfc'))
          def get_rfc(self, num):
              """Retrieve text of an IETF Request for Comments"""
              url = 'https://tools.ietf.org/rfc/rfc%d.txt' % num
              with urllib.request.urlopen(url) as s:
                  return s.read()
:mod:`cachetools.keys` --- Key functions for memoizing decorators
-----------------------------------------------------------------
.. autofunction:: hashkey

   This function returns a :class:`tuple` instance suitable as a cache
   key, provided the positional and keyword arguments are hashable.
.. autofunction:: typedkey

   This function is similar to :func:`hashkey`, but arguments of
   different types will yield distinct cache keys.  For example,
   ``typedkey(3)`` and ``typedkey(3.0)`` will return different keys.
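A quick sketch of the difference between the two key functions:

```python
from cachetools.keys import hashkey, typedkey

# hashkey only looks at argument values, so 3 and 3.0 collide.
assert hashkey(3) == hashkey(3.0)

# typedkey also takes the argument types into account.
assert typedkey(3) != typedkey(3.0)
```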
These functions can also be used as building blocks for custom key
functions for handling some non-hashable arguments, for example by
converting such arguments into hashable equivalents.
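For instance, a dictionary argument can be turned into a hashable
:class:`frozenset` before building the key (``envkey`` and ``run``
are illustrative names, not part of the library):

```python
from cachetools import cached
from cachetools.keys import hashkey

def envkey(name, env={}):
    """Build a cache key, converting the dict argument into a frozenset."""
    return hashkey(name, frozenset(env.items()))

@cached(cache={}, key=envkey)
def run(name, env={}):
    # stand-in for some expensive operation depending on name and env
    return name, sorted(env.items())

print(run("spam", env={"PATH": "/bin"}))  # ('spam', [('PATH', '/bin')])
```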
:mod:`cachetools.func` --- :func:`functools.lru_cache` compatible decorators
----------------------------------------------------------------------------
To ease migration from (or to) Python 3's :func:`functools.lru_cache`,
this module provides several memoizing function decorators with a
similar API.  All these decorators wrap a function with a memoizing
callable that saves up to the `maxsize` most recent calls, using
different caching strategies.  If `maxsize` is set to :const:`None`,
the caching strategy is effectively disabled and the cache can grow
without bound.
If the optional argument `typed` is set to :const:`True`, function
arguments of different types will be cached separately.  Like
:func:`functools.lru_cache`, the decorators may also be used without
any arguments.  This allows the decorator to be applied directly to a
user function, leaving `maxsize` at its default value of 128.
The wrapped function is instrumented with :func:`cache_info` and
:func:`cache_clear` functions to provide information about cache
performance and clear the cache.  Please see the
:func:`functools.lru_cache` documentation for details.  Also note that
all the decorators in this module are thread-safe by default.
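A sketch of basic usage, mirroring the :func:`functools.lru_cache`
API:

```python
import cachetools.func

@cachetools.func.lru_cache(maxsize=128)
def fib(n):
    """Compute the nth Fibonacci number."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(20)
print(fib.cache_info())  # hits/misses/maxsize/currsize, as with functools
fib.cache_clear()        # empty the cache again
```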
.. decorator:: fifo_cache(maxsize=128, typed=False)

   Decorator that wraps a function with a memoizing callable that
   saves up to `maxsize` results based on a First In First Out
   (FIFO) algorithm.
.. decorator:: lfu_cache(maxsize=128, typed=False)

   Decorator that wraps a function with a memoizing callable that
   saves up to `maxsize` results based on a Least Frequently Used
   (LFU) algorithm.
.. decorator:: lru_cache(maxsize=128, typed=False)

   Decorator that wraps a function with a memoizing callable that
   saves up to `maxsize` results based on a Least Recently Used (LRU)
   algorithm.
.. decorator:: mru_cache(maxsize=128, typed=False)

   Decorator that wraps a function with a memoizing callable that
   saves up to `maxsize` results based on a Most Recently Used (MRU)
   algorithm.
.. decorator:: rr_cache(maxsize=128, choice=random.choice, typed=False)

   Decorator that wraps a function with a memoizing callable that
   saves up to `maxsize` results based on a Random Replacement (RR)
   algorithm.
.. decorator:: ttl_cache(maxsize=128, ttl=600, timer=time.monotonic, typed=False)

   Decorator to wrap a function with a memoizing callable that saves
   up to `maxsize` results based on a Least Recently Used (LRU)
   algorithm with a per-item time-to-live (TTL) value.
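As a sketch, results are served from the cache until the time-to-live
elapses (the generous `ttl` here just keeps the example
deterministic):

```python
import itertools

import cachetools.func

calls = itertools.count()

@cachetools.func.ttl_cache(maxsize=None, ttl=600)
def snapshot():
    """Stand-in for fetching an expensive, slowly changing value."""
    return next(calls)

# Within the ttl window, repeated calls return the cached result.
assert snapshot() == snapshot()
```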
.. _cache algorithm: http://en.wikipedia.org/wiki/Cache_algorithms
.. _context manager: http://docs.python.org/dev/glossary.html#term-context-manager
.. _mapping: http://docs.python.org/dev/glossary.html#term-mapping
.. _mutable: http://docs.python.org/dev/glossary.html#term-mutable