// Copyright (c) 2011 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifndef BASE_TRACKED_OBJECTS_H_
#define BASE_TRACKED_OBJECTS_H_
#pragma once

#include <map>
#include <string>
#include <vector>

#include "base/base_api.h"
#include "base/synchronization/lock.h"
#include "base/tracked.h"
#include "base/threading/thread_local_storage.h"

// TrackedObjects provides a database of stats about objects (generally Tasks)
// that are tracked.  Tracking means their birth, death, duration, birth thread,
// death thread, and birth place are recorded.  This data is carefully spread
// across a series of objects so that the counts and times can be rapidly
// updated without (usually) having to lock the data, and hence there is usually
// very little contention caused by the tracking.  The data can be viewed via
// the about:tasks URL, with a variety of sorting and filtering choices.
//
// These classes serve as the basis of a profiler of sorts for the Tasks system.
// As a result, design decisions were made to maximize speed, by minimizing
// recurring allocation/deallocation, lock contention and data copying.  In the
// "stable" state, which is reached relatively quickly, there is no separate
// marginal allocation cost associated with construction or destruction of
// tracked objects, no locks are generally employed, and probably the largest
// computational cost is associated with obtaining start and stop times for
// instances as they are created and destroyed.  The introduction of worker
// threads had a slight impact on this approach, and required use of some locks
// when accessing data from the worker threads.
//
// The following describes the lifecycle of tracking an instance.
//
// First off, when the instance is created, the FROM_HERE macro is expanded
// to specify the birth place (file, line, function) where the instance was
// created.  That data is used to create a transient Location instance
// encapsulating the above triple of information.  The strings (like __FILE__)
// are passed around by reference, with the assumption that they are static, and
// will never go away.  This ensures that the strings can be dealt with as atoms
// with great efficiency (i.e., copying of strings is never needed, and
// comparisons for equality can be based on pointer comparisons).
//
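// As an illustration, the macro expands to (approximately) the following; the
// exact definition lives alongside the Location class:
//
//   #define FROM_HERE \
//       tracked_objects::Location(__FUNCTION__, __FILE__, __LINE__)
//
// so every call site hands us three compiler-provided static values, and two
// Locations from the same call site can be compared without string work.
//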
// Next, a Births instance is created for use ONLY on the thread where this
// instance was created.  That Births instance records (in a base class
// BirthOnThread) references to the static data provided in a Location instance,
// as well as a pointer specifying the thread on which the birth takes place.
// Hence there is at most one Births instance for each Location on each thread.
// The derived Births class contains slots for recording statistics about all
// instances born at the same location.  Statistics currently include only the
// count of instances constructed.
// Since the base class BirthOnThread contains only constant data, it can be
// freely accessed by any thread at any time (i.e., only the statistic needs to
// be handled carefully, and it is ONLY read or written by the birth thread).
//
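// Conceptually, construction of a tracked object therefore performs roughly
// the following (just a sketch, with illustrative member names; the real call
// site lives in the Tracked base class):
//
//   ThreadData* current = ThreadData::current();
//   if (current)
//     births_ = current->TallyABirth(location);  // Find-or-create, then count.
//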
// Having now either constructed or found the Births instance described above, a
// pointer to the Births instance is then embedded in a base class of the
// instance we're tracking (usually a Task). This fact alone is very useful in
// debugging, when there is a question of where an instance came from.  In
// addition, the birth time is also embedded in the base class Tracked (see
// tracked.h), and used to later evaluate the lifetime duration.
// As a result of the above embedding, we can (for any tracked instance) find
// out its location of birth, and thread of birth, without using any locks, as
// all that data is constant across the life of the process.
//
// The amount of memory used in the above data structures depends on how many
// threads there are, and how many Locations of construction there are.
// Fortunately, we don't use memory that is the product of those two counts, but
// rather we only need one Births instance for each thread that constructs an
// instance at a Location. In many cases, instances (such as Tasks) are only
// created on one thread, so the memory utilization is actually fairly
// restrained.
//
// Lastly, when an instance is deleted, the final tallies of statistics are
// carefully accumulated.  That tallying writes into slots (members) in a
// collection of DeathData instances.  For each birth place Location that is
// destroyed on a thread, there is a DeathData instance to record the additional
// death count, as well as accumulate the lifetime duration of the instance as
// it is destroyed (dies).  By maintaining a single place to aggregate this
// addition *only* for the given thread, we avoid the need to lock such
// DeathData instances.
//
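// Conceptually, destruction of a tracked object therefore performs roughly the
// following (again just a sketch, with illustrative member names):
//
//   ThreadData* current = ThreadData::current();
//   if (current && births_)
//     current->TallyADeath(*births_, base::TimeTicks::Now() - birth_time_);
//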
// With the above lifecycle description complete, the major remaining detail is
// explaining how each thread maintains a list of DeathData instances, and of
// Births instances, and is able to avoid additional (redundant/unnecessary)
// allocations.
//
// Each thread maintains a list of data items specific to that thread in a
// ThreadData instance (for that specific thread only).  The two critical items
// are lists of DeathData and Births instances.  These lists are maintained in
// STL maps, which are indexed by Location. As noted earlier, we can compare
// locations very efficiently as we consider the underlying data (file,
// function, line) to be atoms, and hence pointer comparison is used rather than
// (slow) string comparisons.
//
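// For illustration, the per-thread birth bookkeeping amounts to roughly this
// map lookup (a sketch of what TallyABirth() does; the BirthMap typedef appears
// below in ThreadData):
//
//   Births* child;
//   BirthMap::iterator it = birth_map_.find(location);
//   if (it != birth_map_.end()) {
//     child = it->second;
//   } else {
//     child = new Births(location);
//     birth_map_[location] = child;
//   }
//   child->RecordBirth();
//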
// To provide a mechanism for iterating over all "known threads," which means
// threads that have recorded a birth or a death, we create a singly linked list
// of ThreadData instances. Each such instance maintains a pointer to the next
// one.  A static member of ThreadData provides a pointer to the first_ item on
// this global list, and access to that first_ item requires the use of a lock_.
// When a new ThreadData instance is added to the global list, it is
// pre-pended, which ensures that any prior acquisition of the list is valid
// (i.e., the holder can iterate over it without fear of it changing, or the
// necessity of using an additional lock).  Iterations are actually pretty rare
// (used primarily for cleanup, or snapshotting data for display), so this lock
// has very little global performance impact.
//
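// The pre-pending is the only mutation of the list, and is roughly (a sketch;
// base::AutoLock comes from base/synchronization/lock.h, already included):
//
//   base::AutoLock lock(list_lock_);
//   next_ = first_;
//   first_ = this;
//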
// The above description tries to define the high performance (run time)
// portions of these classes.  After gathering statistics, calls instigated
// by visiting about:tasks will assemble and aggregate data for display. The
// following data structures are used for producing such displays.  They are
// not performance critical, and their only major constraint is that they should
// be able to run concurrently with ongoing augmentation of the birth and death
// data.
//
// For a given birth location, information about births is spread across data
// structures that are asynchronously changing on various threads.  For display
// purposes, we need to construct Snapshot instances for each combination of
// birth thread, death thread, and location, along with the count of such
// lifetimes.  We gather such data into Snapshot instances, so that such
// instances can be sorted and aggregated (and remain frozen during our
// processing).  Snapshot instances use pointers to constant portions of the
// birth and death data structures, but have local (frozen) copies of the
// actual statistics (birth count, durations, etc.).
//
// A DataCollector is a container object that holds a set of Snapshots.  A
// DataCollector can be passed from thread to thread, and each thread
// contributes to it by adding or updating Snapshot instances.  DataCollector
// instances are thread safe containers which are passed to various threads to
// accumulate all Snapshot instances.
//
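// For illustration, a snapshotting pass over all known threads amounts to
// roughly the following (a sketch, using only the accessors declared below):
//
//   DataCollector collected_data;
//   for (ThreadData* data = ThreadData::first(); data; data = data->next())
//     collected_data.Append(*data);
//   collected_data.AddListOfLivingObjects();
//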
// After an array of Snapshot instances is collected into a DataCollector, the
// instances need to be sorted, and possibly aggregated (example: how many
// threads are in a specific consecutive set of Snapshots?  What was the total
// birth count for that set? etc.).  Aggregation instances collect running sums
// of any set of snapshot instances, and are used to print sub-totals in an
// about:tasks page.
//
// TODO(jar): I need to store DataCollections, and provide facilities for taking
// the difference between two gathered DataCollections.  For now, I'm just
// adding a hack that Reset()'s to zero all counts and stats.  This is also
// done in a slightly thread-unsafe fashion, as the resetting is done
// asynchronously relative to ongoing updates, and worse yet, some data fields
// are 64-bit quantities, and are not atomically accessed (reset or incremented
// etc.).  For basic profiling, this will work "most of the time," and should be
// sufficient... but storing away DataCollections is the "right way" to do this.
//
class MessageLoop;


namespace tracked_objects {

//------------------------------------------------------------------------------
// For a specific thread, and a specific birth place, the collection of all
// death info (with tallies for each death thread, to prevent access conflicts).
class ThreadData;
class BASE_API BirthOnThread {
 public:
  explicit BirthOnThread(const Location& location);

  const Location location() const { return location_; }
  const ThreadData* birth_thread() const { return birth_thread_; }

 private:
  // File/lineno of birth.  This defines the essence of the type, as the context
  // of the birth (construction) often tells what the item is for.  This field
  // is const, and hence safe to access from any thread.
  const Location location_;

  // The thread that records births into this object.  Only this thread is
  // allowed to access birth_count_ (which changes over time).
  const ThreadData* birth_thread_;  // The thread this birth took place on.

  DISALLOW_COPY_AND_ASSIGN(BirthOnThread);
};

//------------------------------------------------------------------------------
// A class for accumulating counts of births (without bothering with a map<>).

class BASE_API Births: public BirthOnThread {
 public:
  explicit Births(const Location& location);

  int birth_count() const { return birth_count_; }

  // When we have a birth, we update the count for this birthplace.
  void RecordBirth() { ++birth_count_; }

  // When a birthplace is changed (updated), we need to decrement the counter
  // for the old instance.
  void ForgetBirth() { --birth_count_; }  // We corrected a birth place.

  // Hack to quickly reset all counts to zero.
  void Clear() { birth_count_ = 0; }

 private:
  // The number of births on this thread for our location_.
  int birth_count_;

  DISALLOW_COPY_AND_ASSIGN(Births);
};

//------------------------------------------------------------------------------
// Basic info summarizing multiple destructions of an object with a single
// birthplace (fixed Location).  Used both on specific threads, and also used
// in snapshots when integrating assembled data.

class BASE_API DeathData {
 public:
  // Default initializer.
  DeathData() : count_(0), square_duration_(0) {}

  // When deaths have not yet taken place, and we gather data from all the
  // threads, we create DeathData stats that tally the number of births without
  // a corresponding death.
  explicit DeathData(int count) : count_(count), square_duration_(0) {}

  void RecordDeath(const base::TimeDelta& duration);

  // Metrics accessors.
  int count() const { return count_; }
  base::TimeDelta life_duration() const { return life_duration_; }
  int64 square_duration() const { return square_duration_; }
  int AverageMsDuration() const;
  double StandardDeviation() const;
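  // For reference, the deviation can be derived from the three tallies kept
  // below; a sketch of the computation for count_ > 0 (the implementation file
  // owns the exact guarding and rounding):
  //
  //   double mean = life_duration_.InMillisecondsF() / count_;
  //   double variance =
  //       static_cast<double>(square_duration_) / count_ - mean * mean;
  //   return sqrt(variance);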

  // Accumulate metrics from other into this.
  void AddDeathData(const DeathData& other);

  // Simple print of internal state.
  void Write(std::string* output) const;

  // Reset all tallies to zero.
  void Clear();

 private:
  int count_;                      // Number of destructions.
  base::TimeDelta life_duration_;  // Sum of all lifetime durations.
  int64 square_duration_;          // Sum of squares in milliseconds.
};

//------------------------------------------------------------------------------
// A temporary collection of data that can be sorted and summarized.  It is
// gathered (carefully) from many threads.  Instances are held in arrays and
// processed, filtered, and rendered.
// The source of this data was collected on many threads, and is asynchronously
// changing.  The data in this instance is not asynchronously changing.

class BASE_API Snapshot {
 public:
  // When snapshotting a full life cycle set (birth-to-death), use this:
  Snapshot(const BirthOnThread& birth_on_thread, const ThreadData& death_thread,
           const DeathData& death_data);

  // When snapshotting a birth, with no death yet, use this:
  Snapshot(const BirthOnThread& birth_on_thread, int count);

  const ThreadData* birth_thread() const { return birth_->birth_thread(); }
  const Location location() const { return birth_->location(); }
  const BirthOnThread& birth() const { return *birth_; }
  const ThreadData* death_thread() const { return death_thread_; }
  const DeathData& death_data() const { return death_data_; }
  const std::string DeathThreadName() const;

  int count() const { return death_data_.count(); }
  base::TimeDelta life_duration() const { return death_data_.life_duration(); }
  int64 square_duration() const { return death_data_.square_duration(); }
  int AverageMsDuration() const { return death_data_.AverageMsDuration(); }

  void Write(std::string* output) const;

  void Add(const Snapshot& other);

 private:
  const BirthOnThread* birth_;  // Includes Location and birth_thread.
  const ThreadData* death_thread_;
  DeathData death_data_;
};
//------------------------------------------------------------------------------
// DataCollector is a container class for Snapshot and BirthOnThread count
// items.  It protects the gathering under locks, so that it can be called via
// PostTask on any thread, or passed to all the target threads in parallel.

class BASE_API DataCollector {
 public:
  typedef std::vector<Snapshot> Collection;

  // Construct with a count of how many threads should contribute.  This helps
  // us determine (in the async case) when we are done with all contributions.
  DataCollector();
  ~DataCollector();

  // Add all stats from the indicated thread into our arrays.  This function is
  // mutex protected, and *could* be called from any thread (although the
  // current implementation serializes calls to Append).
  void Append(const ThreadData& thread_data);

  // After the accumulation phase, the following accessor is used to process the
  // data.
  Collection* collection();

  // After collection of death data is complete, we can add entries for all the
  // remaining living objects.
  void AddListOfLivingObjects();

 private:
  typedef std::map<const BirthOnThread*, int> BirthCount;

  // This instance may be provided to several threads to contribute data.  The
  // following counter tracks how many more threads will contribute.  When it is
  // zero, then all asynchronous contributions are complete, and locked access
  // is no longer needed.
  int count_of_contributing_threads_;

  // The array that we collect data into.
  Collection collection_;

  // The total number of births recorded at each location for which we have not
  // seen a death count.
  BirthCount global_birth_count_;

  base::Lock accumulation_lock_;  // Protects access during accumulation phase.

  DISALLOW_COPY_AND_ASSIGN(DataCollector);
};

//------------------------------------------------------------------------------
// Aggregation contains summaries (totals and subtotals) of groups of Snapshot
// instances to provide printing of these collections on a single line.

class BASE_API Aggregation: public DeathData {
 public:
  Aggregation();
  ~Aggregation();

  void AddDeathSnapshot(const Snapshot& snapshot);
  void AddBirths(const Births& births);
  void AddBirth(const BirthOnThread& birth);
  void AddBirthPlace(const Location& location);
  void Write(std::string* output) const;
  void Clear();

 private:
  int birth_count_;
  std::map<std::string, int> birth_files_;
  std::map<Location, int> locations_;
  std::map<const ThreadData*, int> birth_threads_;
  DeathData death_data_;
  std::map<const ThreadData*, int> death_threads_;

  DISALLOW_COPY_AND_ASSIGN(Aggregation);
};

//------------------------------------------------------------------------------
// Comparator is a class that supports the comparison of Snapshot instances.
// An instance is actually a list of chained Comparators, that can provide for
// arbitrary ordering.  The path portion of an about:tasks URL is translated
// into such a chain, which is then used to order Snapshot instances in a
// vector.  It orders them into groups (for aggregation), and can also order
// instances within the groups (for detailed rendering of the instances in an
// aggregation).

class BASE_API Comparator {
 public:
  // Selector enum is the token identifier for each parsed keyword, most of
  // which specify a sort order.
  // Since it is not meaningful to sort more than once on a specific key, we
  // use bitfields to accumulate what we have sorted on so far.
  enum Selector {
    // Sort orders.
    NIL = 0,
    BIRTH_THREAD = 1,
    DEATH_THREAD = 2,
    BIRTH_FILE = 4,
    BIRTH_FUNCTION = 8,
    BIRTH_LINE = 16,
    COUNT = 32,
    AVERAGE_DURATION = 64,
    TOTAL_DURATION = 128,

    // Immediate action keywords.
    RESET_ALL_DATA = -1,
  };

  explicit Comparator();

  // Reset the comparator to a NIL selector.  Clear() also recursively deletes
  // any tiebreaker_ entries.  NOTE: We can't use a standard destructor, because
  // the sort algorithm makes copies of this object, and then deletes them,
  // which would cause problems (either we'd make expensive deep copies, or we'd
  // do more than one delete on a tiebreaker_).
  void Clear();

  // The less() operator for sorting the array via std::sort().
  bool operator()(const Snapshot& left, const Snapshot& right) const;

  void Sort(DataCollector::Collection* collection) const;

  // Check to see if the items are sort equivalents (should be aggregated).
  bool Equivalent(const Snapshot& left, const Snapshot& right) const;

  // Check to see if all required fields are present in the given sample.
  bool Acceptable(const Snapshot& sample) const;

  // A comparator can be refined by specifying what to do if the selected basis
  // for comparison is insufficient to establish an ordering.  This call adds
  // the indicated attribute as the new "least significant" basis of comparison.
  void SetTiebreaker(Selector selector, const std::string& required);

  // Indicate if this instance is set up to sort by the given Selector, thereby
  // putting that information in the SortGrouping, so it is not needed in each
  // printed line.
  bool IsGroupedBy(Selector selector) const;

  // Using the tiebreakers as set above, we mostly get an ordering, with
  // equivalent groups.  If those groups are displayed (rather than just being
  // aggregated), then the following is used to order them (within the group).
  void SetSubgroupTiebreaker(Selector selector);

  // Translate a keyword and restriction in a URL path to a selector for
  // sorting.
  void ParseKeyphrase(const std::string& key_phrase);

  // Parse a query in an about:tasks URL to decide on sort ordering.
  bool ParseQuery(const std::string& query);

  // Output a header line that can be used to indicate what items will be
  // collected in the group.  It lists all (potentially) tested attributes and
  // their values (in the sample item).
  bool WriteSortGrouping(const Snapshot& sample, std::string* output) const;

  // Output a sample, with SortGroup details not displayed.
  void WriteSnapshot(const Snapshot& sample, std::string* output) const;

 private:
  // The selector directs this instance to compare based on the specified
  // members of the tested elements.
  enum Selector selector_;

  // For filtering into acceptable and unacceptable snapshot instances, the
  // following is required to be a substring of the field indicated by
  // selector_.
  std::string required_;

  // If this instance can't decide on an ordering, we can consult a tie-breaker
  // which may have a different basis of comparison.
  Comparator* tiebreaker_;

  // We OR together all the selectors we sort on (not counting sub-group
  // selectors), so that we can tell if we've decided to group on any given
  // criteria.
  int combined_selectors_;

  // Some tiebreakers are for subgroup ordering, and not for basic ordering (in
  // preparation for aggregation).  The subgroup tiebreakers are not consulted
  // when deciding if two items are in equivalent groups.  This flag tells us
  // to ignore the tiebreaker when doing Equivalent() testing.
  bool use_tiebreaker_for_sort_only_;
};


//------------------------------------------------------------------------------
// For each thread, we have a ThreadData that stores all tracking info generated
// on this thread.  This prevents the need for locking as data accumulates.

class BASE_API ThreadData {
 public:
  typedef std::map<Location, Births*> BirthMap;
  typedef std::map<const Births*, DeathData> DeathMap;

  ThreadData();
  ~ThreadData();

  // Using Thread Local Store, find the current instance for collecting data.
  // If an instance does not exist, construct one (and remember it for use on
  // this thread).
  // If shutdown has already started, and we don't yet have an instance, then
  // return null.
  static ThreadData* current();

  // For a given about:tasks URL, develop resulting HTML, and append to output.
  static void WriteHTML(const std::string& query, std::string* output);

  // For a given accumulated array of results, use the comparator to sort and
  // subtotal, writing the results to the output.
  static void WriteHTMLTotalAndSubtotals(
      const DataCollector::Collection& match_array,
      const Comparator& comparator, std::string* output);

  // In this thread's data, record a new birth.
  Births* TallyABirth(const Location& location);

  // Find a place to record a death on this thread.
  void TallyADeath(const Births& lifetimes, const base::TimeDelta& duration);

  // (Thread safe) Get start of list of instances.
  static ThreadData* first();
  // Iterate through the null terminated list of instances.
  ThreadData* next() const { return next_; }

  MessageLoop* message_loop() const { return message_loop_; }
  const std::string ThreadName() const;

  // Using our lock, make a copy of the specified maps.  These calls may arrive
  // from non-local threads, and are used to quickly scan data from all threads
  // in order to build an HTML page for about:tasks.
  void SnapshotBirthMap(BirthMap* output) const;
  void SnapshotDeathMap(DeathMap* output) const;

  // Hack: asynchronously clear all birth counts and death tallies in all
  // ThreadData instances.  The numerical (zeroing) part is done without the
  // use of locks or atomic exchanges, and may (for int64 values) produce
  // bogus counts VERY rarely.
  static void ResetAllThreadData();

  // Using our lock to protect the iteration, clear all birth and death data.
  void Reset();

  // Using the "known list of threads" gathered during births and deaths, the
  // following attempts to run the given function once on all such threads.
  // Note that the function can only be run on threads which have a message
  // loop!
  static void RunOnAllThreads(void (*Func)());

  // Set internal status_ to either become ACTIVE, or later, to be SHUTDOWN,
  // based on the argument being true or false respectively.
  // If tracking is not compiled in, this function will return false.
  static bool StartTracking(bool status);
  static bool IsActive();

#ifdef OS_WIN
  // WARNING: ONLY call this function when all MessageLoops are still intact for
  // all registered threads.  IF you call it later, you will crash.
  // Note: You don't need to call it at all, and you can wait till you are
  // single threaded (again) to do the cleanup via
  // ShutdownSingleThreadedCleanup().
  // Start the teardown (shutdown) process in a multi-thread mode by disabling
  // further additions to the thread database on all threads.  First it makes a
  // local (locked) change to prevent any more threads from registering.  Then
  // it posts a Task to all registered threads to be sure they are aware that no
  // more accumulation can take place.
  static void ShutdownMultiThreadTracking();
#endif

  // WARNING: ONLY call this function when you are running single threaded
  // (again) and all message loops and threads have terminated.  Until that
  // point some threads may still attempt to write into our data structures.
  // Delete recursively all data structures, starting with the list of
  // ThreadData instances.
  static void ShutdownSingleThreadedCleanup();

 private:
  // Current allowable states of the tracking system.  The states always
  // proceed towards SHUTDOWN, and never go backwards.
  enum Status {
    UNINITIALIZED,
    ACTIVE,
    SHUTDOWN,
  };

#if defined(OS_WIN)
  class ThreadSafeDownCounter;
  class RunTheStatic;
#endif

  // Each registered thread is called to set status_ to SHUTDOWN.
  // This is done redundantly on every registered thread because it is not
  // protected by a mutex.  Running on all threads guarantees we get the
  // notification into the memory cache of all possible threads.
  static void ShutdownDisablingFurtherTracking();

  // We use thread local store to identify which ThreadData to interact with.
  static base::ThreadLocalStorage::Slot tls_index_;

  // Link to the most recently created instance (starts a null terminated list).
  static ThreadData* first_;
  // Protection for access to first_.
  static base::Lock list_lock_;

  // We set status_ to SHUTDOWN when we shut down the tracking service. This
  // setting is redundantly established by all participating threads so that we
  // are *guaranteed* (without locking) that all threads can "see" the status
  // and avoid additional calls into the service.
  static Status status_;

  // Link to next instance (null terminated list). Used to globally track all
  // registered instances (corresponds to all registered threads where we keep
  // data).
  ThreadData* next_;

  // The message loop where tasks needing to access this instance's private data
  // should be directed.  Since some threads have no message loop, some
  // instances have data that can't be (safely) modified externally.
  MessageLoop* message_loop_;

  // A map used on each thread to keep track of Births on this thread.
  // This map should only be accessed on the thread it was constructed on.
  // When a snapshot is needed, this structure can be locked in place for the
  // duration of the snapshotting activity.
  BirthMap birth_map_;

  // Similar to birth_map_, this records information about deaths of tracked
  // instances (i.e., when a tracked instance was destroyed on this thread).
  // It is locked before changing, and hence other threads may access it by
  // locking before reading it.
  DeathMap death_map_;

  // Lock to protect *some* access to BirthMap and DeathMap.  The maps are
  // regularly read and written on this thread, but may only be read from other
  // threads.  To support this, we acquire this lock if we are writing from this
  // thread, or reading from another thread.  For reading from this thread we
  // don't need a lock, as there is no potential for a conflict since the
  // writing is only done from this thread.
  mutable base::Lock lock_;

  DISALLOW_COPY_AND_ASSIGN(ThreadData);
};


//------------------------------------------------------------------------------
// Provide a simple way to start global tracking, and to tear down tracking
// when done.  Note that construction and destruction of this object must be
// done when running in single-threaded mode (before spawning a lot of threads
// for construction, and after shutting down all the threads for destruction).

// To prevent grabbing thread local store resources time and again if someone
// chooses to try to re-run the browser many times, we maintain global state and
// only allow the tracking system to be started up at most once, and shutdown
// at most once.  See bug 31344 for an example.

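// The expected usage is to declare a single AutoTracking instance in some
// early, central spot (a sketch; the concrete instantiation point lives
// outside this header):
//
//   static AutoTracking auto_tracking;  // Starts tracking at most once.
//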
class AutoTracking {
 public:
  AutoTracking() {
    if (state_ != kNeverBeenRun)
      return;
    ThreadData::StartTracking(true);
    state_ = kRunning;
  }

  ~AutoTracking() {
#ifndef NDEBUG
    if (state_ != kRunning)
      return;
    // We don't do cleanup of any sort in Release build because it is a
    // complete waste of time.  Since Chromium doesn't join all its threads and
    // guarantee we're in a single threaded mode, we don't even do cleanup in
    // debug mode, as it will generate race-checker warnings.
#endif
  }

 private:
  enum State {
    kNeverBeenRun,
    kRunning,
    kTornDownAndStopped,
  };
  static State state_;

  DISALLOW_COPY_AND_ASSIGN(AutoTracking);
};


}  // namespace tracked_objects

#endif  // BASE_TRACKED_OBJECTS_H_
