NOTES ON DICTIONARIES
=====================

Principal Use Cases for Dictionaries
------------------------------------

Passing keyword arguments
    Typically, one read and one write for 1 to 3 elements.
    Occurs frequently in normal Python code.
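This pattern can be observed directly; the function below is a hypothetical example, not from the implementation:

```python
# A function's **kwargs are collected into a small, short-lived dict.
# Typical calls write 1 to 3 entries once and read each entry once.
def connect(**options):
    # one read per key of interest
    host = options.get("host", "localhost")
    port = options.get("port", 8080)
    return host, port

result = connect(host="example.org", port=80)
```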

Class method lookup
    Dictionaries vary in size, with 8 to 16 elements being common.
    Usually written once with many lookups.
    When base classes are used, there are many failed lookups
        followed by a lookup in a base class.
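The failed-lookup-then-base-class pattern shows up in any simple hierarchy; the class names here are made up for illustration:

```python
class Base:
    def greet(self):
        return "hello"

class Child(Base):
    pass

# A method lookup on a Child instance first misses Child's class dict,
# then succeeds in Base's class dict by walking the MRO.
assert "greet" not in Child.__dict__   # failed lookup here...
assert "greet" in Base.__dict__        # ...followed by a hit here

obj = Child()
```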

Instance attribute lookup and global variables
    Dictionaries vary in size.  4 to 10 elements are common.
    Both reads and writes are common.

Builtins
    Frequent reads.  Almost never written.
    About 150 interned strings (as of Py3.3).
    A few keys are accessed much more frequently than others.

Uniquification
    Dictionaries of any size.  Bulk of work is in creation.
    Repeated writes to a smaller set of keys.
    Single read of each key.
    Some use cases have two consecutive accesses to the same key.

    * Removing duplicates from a sequence:
        dict.fromkeys(seqn).keys()

    * Counting elements in a sequence:
        for e in seqn:
            d[e] = d.get(e, 0) + 1

    * Accumulating references in a dictionary of lists:

        for pagenumber, page in enumerate(pages):
            for word in page:
                d.setdefault(word, []).append(pagenumber)

    Note that the second example is a use case characterized by a get and
    a set to the same key.  There are similar use cases with a __contains__
    check followed by a get, set, or del of the same key.  Part of the
    justification for d.setdefault is that it combines the two lookups
    into one.
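The two-lookup pattern and its single-lookup replacement can be put side by side; this is a minimal sketch:

```python
# Two-lookup pattern: a __contains__ check followed by a set,
# then another lookup for the append.
d = {}
if "word" not in d:
    d["word"] = []
d["word"].append(1)

# Single-lookup pattern: setdefault returns the existing or
# newly inserted list in one operation.
e = {}
e.setdefault("word", []).append(1)

assert d == e == {"word": [1]}
```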

Membership Testing
    Dictionaries of any size.  Created once and then rarely changed.
    Single write to each key.
    Many calls to __contains__() (or has_key() in Python 2).
    Similar access patterns occur with replacement dictionaries
        such as with the % formatting operator.
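The replacement-dictionary pattern looks like this in practice (the keys and values below are made up for illustration):

```python
# Each %(name)s conversion performs one dict lookup in the mapping;
# the mapping itself is built once and only read afterwards.
replacements = {"user": "alice", "count": 3}
message = "%(user)s has %(count)d new messages" % replacements
```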

Dynamic Mappings
    Characterized by deletions interspersed with adds and replacements.
    Performance benefits greatly from the re-use of dummy entries.
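A workload of this shape can be sketched as follows; the loop is a made-up example of the access pattern, not a benchmark:

```python
# Interleave adds, replacements, and deletions over a small key space.
# Deleted slots leave dummy entries that later inserts can re-use.
d = {}
for i in range(100):
    d[i % 10] = i                    # add or replace
    if i % 3 == 0:
        d.pop((i + 5) % 10, None)    # interspersed deletion
```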

Data Layout
-----------

Dictionaries are composed of three components:
    * The dictobject struct itself
    * A dict-keys object (keys & hashes)
    * A values array

Tunable Dictionary Parameters
-----------------------------

See the comments for PyDict_MINSIZE_SPLIT, PyDict_MINSIZE_COMBINED,
USABLE_FRACTION and GROWTH_RATE in dictobject.c.

Tune-ups should be measured across a broad range of applications and
use cases.  A change to any parameter will help in some situations and
hurt in others.  The key is to find settings that help the most common
cases and do the least damage to the less common cases.  Results will
vary dramatically depending on the exact number of keys, whether the
keys are all strings, whether reads or writes dominate, and the exact
hash values of the keys (some sets of values have fewer collisions than
others).  Any one test or benchmark is likely to prove misleading.
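A micro-benchmark along these lines can be sketched with timeit; the sizes and key set below are arbitrary choices, and absolute numbers are machine- and build-dependent:

```python
import timeit

# Time a single successful lookup at several dictionary sizes.
# Different sizes, key types, and hash distributions can give very
# different results, so no single number here is conclusive.
for n in (8, 64, 512):
    d = {str(i): i for i in range(n)}
    t = timeit.timeit("d['0']", globals={"d": d}, number=100_000)
    print(f"size {n}: {t:.4f}s for 100k lookups")
```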

While making a dictionary more sparse reduces collisions, it impairs
iteration and key listing.  Those methods loop over every potential
entry.  Doubling the size of the dictionary results in twice as many
non-overlapping memory accesses for keys(), items(), values(),
__iter__(), iterkeys(), iteritems(), itervalues(), and update().
Also, every dictionary iterates at least twice, once for the memset()
when it is created and once by dealloc().

Dictionary operations involving only a single key can be O(1) unless
resizing is possible.  By checking for a resize only when the
dictionary can grow (and may *require* resizing), other operations
remain O(1), and the odds of resize thrashing or memory fragmentation
are reduced.  In particular, an algorithm that empties a dictionary
by repeatedly invoking .pop will see no resizing, which would serve
no purpose anyway since the dictionary is eventually discarded
entirely.
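The empty-by-popping pattern mentioned above looks like this; a minimal sketch:

```python
# Draining a dict with popitem() triggers no shrink-resizes;
# the table keeps its capacity until the dict itself is discarded.
d = dict.fromkeys(range(1000))
while d:
    d.popitem()
assert len(d) == 0
```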

The key differences between this implementation and earlier versions are:
    1. The table can be split into two parts, the keys and the values.

    2. There is an additional key-value combination: (key, NULL).
       Unlike (<dummy>, NULL), which represents a deleted entry, (key, NULL)
       represents a yet-to-be-inserted value.  This combination can only
       occur when the table is split.

    3. There is no small table embedded in the dict,
       as this would make sharing of key-tables impossible.


These changes have the following consequences:
   1. General dictionaries are slightly larger.

   2. All object dictionaries of a single class can share a single
      key-table, saving about 60% memory for such cases.
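Key-table sharing (PEP 412) can be glimpsed from Python code; this sketch only checks that two instances of one class report identical __dict__ sizes, since exact byte counts vary across CPython versions:

```python
import sys

class Point:
    def __init__(self, x, y):
        self.x = x   # same attribute order in every instance
        self.y = y   # keeps the shared key-table intact

a = Point(1, 2)
b = Point(3, 4)

# Both instance dicts are split tables sharing one key-table,
# so their reported sizes match.
assert sys.getsizeof(a.__dict__) == sys.getsizeof(b.__dict__)
```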

Results of Cache Locality Experiments
-------------------------------------

Experiments on an earlier design of dictionary, in which all tables were
combined, showed the following:

  When an entry is retrieved from memory, several adjacent entries are also
  retrieved into a cache line.  Since accessing items in cache is *much*
  cheaper than a cache miss, an enticing idea is to probe the adjacent
  entries as a first step in collision resolution.  Unfortunately, the
  introduction of any regularity into collision searches results in more
  collisions than the current random chaining approach.

  Exploiting cache locality at the expense of additional collisions fails
  to pay off when the entries are already loaded in cache (the expense
  is paid with no compensating benefit).  This occurs in small dictionaries
  where the whole dictionary fits into a pair of cache lines.  It also
  occurs frequently in large dictionaries which have a common access pattern
  where some keys are accessed much more frequently than others.  The
  more popular entries *and* their collision chains tend to remain in cache.

  To exploit cache locality, change the collision resolution section
  in lookdict() and lookdict_string().  Set i^=1 at the top of the
  loop and move the i = (i << 2) + i + perturb + 1 to an unrolled
  version of the loop.
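The probe sequence referenced above (i = i*5 + perturb + 1, masked to the table size) can be simulated in Python; this is an illustration of the recurrence, not the C implementation:

```python
PERTURB_SHIFT = 5

def probe_sequence(hash_value, mask, limit):
    """Yield the slot indices that CPython-style open addressing visits."""
    perturb = hash_value
    i = hash_value & mask
    for _ in range(limit):
        yield i
        perturb >>= PERTURB_SHIFT
        i = ((i << 2) + i + perturb + 1) & mask   # i = i*5 + perturb + 1

# Once perturb reaches zero, i = 5*i + 1 (mod 2**k) is a full-period
# recurrence, so the sequence eventually visits every slot.
mask = 7                     # table of 8 slots
slots = set(probe_sequence(123456789, mask, 64))
assert slots == set(range(8))
```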

For split tables, the above will apply to the keys, but the value will
always be in a different cache line from the key.