:mod:`cachetools` --- Extensible memoizing collections and decorators
======================================================================
For the purpose of this module, a *cache* is a mutable_ mapping_ of a
fixed maximum size.  When the cache is full, i.e. when adding another
item would exceed its maximum size, the cache must choose which
item(s) to discard based on a suitable `cache algorithm`_.  In
general, a cache's size is the total size of its items, and an item's
size is a property or function of its value, e.g. the result of
``sys.getsizeof(value)``.  For the trivial but common case that each
item counts as :const:`1`, a cache's size is equal to the number of
its items, or ``len(cache)``.
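For illustration, a short sketch of these size semantics, using the
:class:`LRUCache` class described below (the variable names are
illustrative only):

```python
from cachetools import LRUCache

# By default every item counts as 1, so the cache's size equals len(cache).
cache = LRUCache(maxsize=2)
cache['a'] = 'spam'
cache['b'] = 'eggs'
cache['c'] = 'ham'          # cache full: the least recently used item is evicted
assert len(cache) == 2

# With getsizeof, an item's size is computed from its value instead.
sized = LRUCache(maxsize=10, getsizeof=len)
sized['a'] = 'spam'         # size 4
sized['b'] = 'eggs'         # size 4
assert sized.currsize == 8  # total size of items, not their count
```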
Multiple cache classes based on different caching algorithms are
implemented, and decorators for easily memoizing function and method
calls are provided, too.
.. testsetup:: *

   from cachetools import cached, cachedmethod, LRUCache, TTLCache

   from unittest import mock
Cache implementations
---------------------
This module provides several classes implementing caches using
different cache algorithms.  All these classes derive from class
:class:`Cache`, which in turn derives from
:class:`collections.abc.MutableMapping`, and provide :attr:`maxsize`
and :attr:`currsize` properties to retrieve the maximum and current
size of the cache.  When a cache is full, :meth:`Cache.__setitem__()`
calls :meth:`self.popitem()` repeatedly until there is enough room
for the item to be added.

:class:`Cache` also features a :meth:`getsizeof` method, which returns
the size of a given value.  The default implementation of
:meth:`getsizeof` returns :const:`1` irrespective of its argument,
making the cache's size equal to the number of its items, or
``len(cache)``.  For convenience, all cache classes accept an optional
named constructor parameter `getsizeof`, which may specify a function
of one argument used to retrieve the size of an item's value.
Note that the values of a :class:`Cache` are mutable by default, as
are e.g. the values of a :class:`dict`.  It is the user's
responsibility to take care that cached values are not accidentally
modified after they have been stored.  Also note that an item's size
is only computed when the item is inserted into the cache.
Please be aware that all these classes are *not* thread-safe.
Access to a shared cache from multiple threads must be properly
synchronized, e.g. by using one of the memoizing decorators with a
suitable `lock` object.
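One possible way to synchronize such access is sketched below; the
``shared_cache`` and ``lookup`` names are illustrative, not part of
:mod:`cachetools`:

```python
import threading

from cachetools import LRUCache

# Illustrative names: a cache shared between threads, guarded by a lock.
shared_cache = LRUCache(maxsize=100)
lock = threading.Lock()

def lookup(key, compute):
    # Even reads must hold the lock: an LRUCache reorders items on access.
    with lock:
        if key in shared_cache:
            return shared_cache[key]
    value = compute()  # run the potentially expensive work outside the lock
    with lock:
        shared_cache[key] = value
    return value
```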
.. autoclass:: Cache(maxsize, getsizeof=None)
By default, items are selected from the list of cache keys using
:func:`random.choice`.  The optional argument `choice` may specify
an alternative function that returns an arbitrary element from a
non-empty sequence.
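A small sketch of the `choice` argument; :func:`min` is passed here
only to make the eviction deterministic for the demonstration, whereas
the default :func:`random.choice` discards an arbitrary item:

```python
from cachetools import RRCache

# choice=min always discards the smallest key (for a reproducible demo).
cache = RRCache(maxsize=2, choice=min)
cache[1] = 'one'
cache[2] = 'two'
cache[3] = 'three'   # cache full: choice([1, 2]) == 1 is discarded
assert 1 not in cache
assert sorted(cache) == [2, 3]
```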
This class associates a time-to-live value with each item.  Items
that expire because they have exceeded their time-to-live will be
no longer accessible, and will be removed eventually.  If no expired
items are there to remove, the least recently used items will be
discarded first to make space when necessary.

By default, the time-to-live is specified in seconds and
:func:`time.monotonic` is used to retrieve the current time.  A
custom `timer` function can also be supplied:
   from datetime import datetime, timedelta

   cache = TTLCache(maxsize=10, ttl=timedelta(hours=12), timer=datetime.now)
The expression `timer() + ttl` at the time of insertion defines the
expiration time of a cache item, and must be comparable against
later results of `timer()`.
Expired items will be removed from a cache only at the next
mutating operation, e.g. :meth:`__setitem__` or :meth:`__delitem__`,
and therefore may still claim memory.  Calling this method removes
all items whose time-to-live would have expired by `time`.
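A short sketch of this expiry behavior; a controllable timer is
supplied here only to make the expiration deterministic for the
demonstration:

```python
from cachetools import TTLCache

# A mutable "clock" stands in for real time, so expiry is reproducible.
clock = [0]

cache = TTLCache(maxsize=10, ttl=5, timer=lambda: clock[0])
cache['a'] = 'spam'     # inserted at time 0, expires at time 5
clock[0] = 3
assert 'a' in cache     # still within its time-to-live
clock[0] = 6
assert 'a' not in cache  # time-to-live exceeded, no longer accessible
cache.expire()           # explicitly remove expired items
assert len(cache) == 0
```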
Extending cache classes
-----------------------
Sometimes it may be desirable to notice when and what cache items are
evicted, i.e. removed from a cache to make room for new items.  Since
all cache implementations call :meth:`popitem` to evict items from the
cache, this can be achieved by overriding this method in a subclass:
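A sketch of this pattern (the subclass name is illustrative):

```python
from cachetools import LRUCache

class TrackedCache(LRUCache):
    # Illustrative subclass: report evictions as they happen.
    def popitem(self):
        key, value = super().popitem()
        print('Key "%s" evicted with value "%s"' % (key, value))
        return key, value

cache = TrackedCache(maxsize=2)
cache['a'] = 1
cache['b'] = 2
cache['c'] = 3   # evicts the least recently used item, printing a message
```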
Similar to the standard library's :class:`collections.defaultdict`,
subclasses of :class:`Cache` may implement a :meth:`__missing__`
method which is called by :meth:`Cache.__getitem__` if the requested
key is not found:
>>> class PepStore(LRUCache):
...     def __missing__(self, key):
...         url = 'http://www.python.org/dev/peps/pep-%04d/' % key
...         pep = urllib.request.urlopen(url).read()
...         self[key] = pep  # store text in cache
...         return pep
Note, though, that such a class does not really behave like a *cache*
any more, and will lead to surprising results when used with any of
the memoizing decorators described below.
>>> @cached(cache={})
... def fib(n):
...     return n if n < 2 else fib(n - 1) + fib(n - 2)
.. decorator:: cached(cache, key=cachetools.keys.hashkey, lock=None)
Decorator to wrap a function with a memoizing callable that saves
results in a cache.
The `cache` argument specifies a cache object to store previous
function arguments and return values.  Note that `cache` need not
be an instance of the cache implementations provided by the
:mod:`cachetools` module.
`key` specifies a function that will be called with the same
positional and keyword arguments as the wrapped function itself,
and which has to return a suitable cache key.  Since caches are
mutable mappings, cache keys must be hashable.
If `lock` is not :const:`None`, it must specify an object
implementing the `context manager`_ protocol.  Any access to the
cache will then be nested in a ``with lock:`` statement.  This can
be used for synchronizing thread access to the cache by providing a
:class:`threading.Lock` instance, for example.

Note that the `lock` context manager is used only to guard access to the
cache object.  The underlying wrapped function will be called
outside the `with` statement, and must be thread-safe by itself.
This can be used for introspection or for bypassing the cache.
To perform operations on the cache object, for example to clear the
cache during runtime, the cache should be assigned to a variable.
When a `lock` object is used, any access to the cache from outside
the function wrapper should also be performed within an appropriate
`with lock:` statement::
   from cachetools.keys import hashkey
   from threading import Lock

   cache = LRUCache(maxsize=32)
   lock = Lock()

   @cached(cache, key=hashkey, lock=lock)
   def get_pep(num):
       'Retrieve text of a Python Enhancement Proposal'
       url = 'http://www.python.org/dev/peps/pep-%04d/' % num
       with urllib.request.urlopen(url) as s:
           return s.read()

   # make sure access to cache is synchronized
   with lock:
       cache.clear()

   # always use the key function for accessing cache items
   with lock:
       cache.pop(hashkey(42), None)
It is also possible to use a single shared cache object with
multiple functions.  In this case, make sure that different
cache keys are generated for each function, even for identical
function arguments:
>>> from cachetools.keys import hashkey
>>> from functools import partial

>>> # shared cache for integer sequences
>>> numcache = {}

>>> @cached(numcache, key=partial(hashkey, 'fib'))
... def fib(n):
...     return n if n < 2 else fib(n - 1) + fib(n - 2)

>>> @cached(numcache, key=partial(hashkey, 'luc'))
... def luc(n):
...     return 2 - n if n < 2 else luc(n - 1) + luc(n - 2)
.. decorator:: cachedmethod(cache, key=cachetools.keys.hashkey, lock=None)
Decorator to wrap a class or instance method with a memoizing
callable that saves results in a (possibly shared) cache.
The main difference between this and the :func:`cached` function
decorator is that `cache` and `lock` are not passed objects, but
functions.  Both will be called with :const:`self` (or :const:`cls`
for class methods) as their sole argument to retrieve the cache or
lock object for the method's respective instance or class.
``lock(self)`` will only guard access to the cache itself.  It is
the user's responsibility to handle concurrent calls to the
underlying wrapped method in a multithreaded environment.
One advantage of :func:`cachedmethod` over the :func:`cached`
function decorator is that cache properties such as `maxsize` can
be set at runtime::
   class CachedPEPs:

       def __init__(self, cachesize):
           self.cache = LRUCache(maxsize=cachesize)

       @cachedmethod(operator.attrgetter('cache'))
       def get(self, num):
           """Retrieve text of a Python Enhancement Proposal"""
           url = 'http://www.python.org/dev/peps/pep-%04d/' % num
           with urllib.request.urlopen(url) as s:
               return s.read()
When using a shared cache for multiple methods, be aware that
different cache keys must be created for each method even when
function arguments are the same::
   class CachedReferences:

       def __init__(self, cachesize):
           self.cache = LRUCache(maxsize=cachesize)

       @cachedmethod(lambda self: self.cache, key=partial(hashkey, 'pep'))
       def get_pep(self, num):
           """Retrieve text of a Python Enhancement Proposal"""
           url = 'http://www.python.org/dev/peps/pep-%04d/' % num
           with urllib.request.urlopen(url) as s:
               return s.read()

       @cachedmethod(lambda self: self.cache, key=partial(hashkey, 'rfc'))
       def get_rfc(self, num):
           """Retrieve text of an IETF Request for Comments"""
           url = 'https://tools.ietf.org/rfc/rfc%d.txt' % num
           with urllib.request.urlopen(url) as s:
               return s.read()
:mod:`cachetools.keys` --- Key functions for memoizing decorators
==================================================================
This function returns a :class:`tuple` instance suitable as a cache
key, provided the positional and keyword arguments are hashable.
Unlike :func:`hashkey`, this function takes the argument types into
account: arguments of different types will yield distinct cache
keys.  For example, ``typedkey(3)`` and ``typedkey(3.0)`` will
return different results.
These functions can also be helpful when implementing custom key
functions for handling some non-hashable arguments.  For example, a
custom key function might convert a non-hashable :class:`dict`
argument into a hashable tuple of its sorted items before passing
it on to :func:`hashkey`.
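A short sketch of these key functions; the ``envkey`` helper is an
illustrative custom key function, not part of :mod:`cachetools.keys`:

```python
from cachetools.keys import hashkey, typedkey

# hashkey compares equal across values that compare equal...
assert hashkey(3) == hashkey(3.0)
# ...while typedkey also takes the argument types into account.
assert typedkey(3) != typedkey(3.0)

# Illustrative custom key function for a non-hashable dict argument:
def envkey(env):
    return hashkey(tuple(sorted(env.items())))

# Equal dicts yield the same cache key regardless of insertion order.
assert envkey({'x': 1, 'y': 2}) == envkey({'y': 2, 'x': 1})
```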
:mod:`cachetools.func` --- :func:`functools.lru_cache` compatible decorators
=============================================================================
To ease migration from (or to) Python 3's :func:`functools.lru_cache`,
this module provides several memoizing function decorators with a
similar API.  If `maxsize` is set to :const:`None`,
the caching strategy is effectively disabled and the cache can grow
without bound.
The wrapped function is instrumented with :func:`cache_info` and
:func:`cache_clear` functions to provide information about cache
performance and clear the cache.  Please see the
:func:`functools.lru_cache` documentation for details.  Also note that
all the decorators in this module are thread-safe by default.
Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a Least Recently Used (LRU)
algorithm with a per-item time-to-live (TTL) value.
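A brief sketch of this decorator; ``get_config`` is an illustrative
stand-in for a costly lookup:

```python
import cachetools.func

# Results are kept for up to ttl seconds (here, ten minutes).
@cachetools.func.ttl_cache(maxsize=128, ttl=600)
def get_config(name):
    return {'name': name}   # placeholder for an expensive computation

first = get_config('db')
second = get_config('db')
assert first is second                # second call served from the cache
info = get_config.cache_info()        # lru_cache-style statistics
assert info.hits == 1 and info.misses == 1
```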
.. _cache algorithm: http://en.wikipedia.org/wiki/Cache_algorithms
.. _context manager: http://docs.python.org/dev/glossary.html#term-context-manager
.. _mapping: http://docs.python.org/dev/glossary.html#term-mapping
.. _mutable: http://docs.python.org/dev/glossary.html#term-mutable