
Lines Matching +full:file +full:- +full:entry +full:- +full:cache

 * Use of this source code is governed by a BSD-style license that can be
 * found in the LICENSE file.
// Ganesh creates a lot of utility textures (e.g., blurred-rrect masks) that need to be shared
// between the direct context and all the DDL recording contexts. This thread-safe cache
// makes that sharing possible.
//
// In operation, each thread will first check if the threaded cache possesses the required
// texture. If not, it will create the texture itself and then attempt to add it to the cache.
// If another thread had added it in the interim, the losing thread will discard its work and
// use the winning thread's texture. When the same texture is created on both a recording
// thread and the gpu thread, the gpu-thread's version has precedence over the recording
// threads'.
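The find-or-add protocol described above can be sketched with a mutex standing in for Skia's spin lock. The class and member names below are illustrative, not Skia's API: each thread builds its candidate speculatively, then attempts to publish it; the first insertion wins and losing threads adopt the winner's object.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <mutex>
#include <string>

// Hypothetical sketch of the find-or-add pattern (not Skia's code).
class FindOrAddCache {
public:
    // Returns the cached value for 'key', inserting 'candidate' only if no
    // other thread has published one in the interim.
    std::shared_ptr<std::string> findOrAdd(int key,
                                           std::shared_ptr<std::string> candidate) {
        std::lock_guard<std::mutex> lock(fLock);
        // try_emplace leaves 'candidate' untouched when the key already exists.
        auto [it, inserted] = fMap.try_emplace(key, std::move(candidate));
        return it->second;  // the winner's object, whether or not we inserted
    }

private:
    std::mutex fLock;
    std::map<int, std::shared_ptr<std::string>> fMap;
};
```

Because losers silently adopt the winner's object, duplicate creation wastes some work but never produces two live copies under the same key.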
// The invariants for this cache differ a bit from those of the proxy and resource caches.
// For this cache:
//
//    only this cache knows the unique key - neither the proxy nor backing resource should
//    be discoverable in any other cache by the unique key
//
//    if a backing resource resides in the resource cache then there should be an entry in
//    this cache
//
//    an entry in this cache, however, doesn't guarantee that there is a corresponding entry
//    in the resource cache - although the entry here should be able to generate that entry
// All the refs held in this cache are dropped prior to clearing out the resource cache.
// For the size_t-variant of GrContext::purgeUnlockedResources, after an initial attempt
// to purge the requested amount of resources fails, uniquely held resources in this cache
// will be dropped in LRU to MRU order until the cache is under budget. Note that this
// prioritizes the survival of resources in this cache over those just in the resource cache.
// For the 'scratchResourcesOnly' variant of GrContext::purgeUnlockedResources, this cache
// won't be modified in the scratch-only case unless the resource cache is over budget (in
// which case it will purge uniquely-held resources in LRU to MRU order to get
// back under budget). In the non-scratch-only case, all uniquely held resources in this
// cache will be released prior to the resource cache being cleared out.
// For GrContext::setResourceCacheLimit, if an initial pass through the resource cache doesn't
// reach the budget, uniquely held resources in this cache will be released in LRU to MRU order.
// For GrContext::performDeferredCleanup, uniquely held resources in this cache not used
// w/in 'msNotUsed' will be released from this cache prior to the resource cache being cleaned.
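A minimal sketch of the "drop uniquely held refs in LRU to MRU order until under budget" policy described above, with std::shared_ptr use counts standing in for Ganesh's ref counts (all names here are illustrative, not Skia's):

```cpp
#include <cassert>
#include <list>
#include <memory>

// Hypothetical cache entry: a ref-counted payload plus its gpu-memory size.
struct CacheEntry {
    std::shared_ptr<int> payload;
    size_t size;
};

// 'entries' is ordered LRU (front) to MRU (back). Entries whose payload is
// referenced only by the cache are released, oldest first, until the total
// size falls under 'budget'. Shared entries are never dropped.
void dropUniquelyHeldUntilUnderBudget(std::list<CacheEntry>& entries,
                                      size_t budget) {
    size_t total = 0;
    for (const auto& e : entries) {
        total += e.size;
    }

    for (auto it = entries.begin(); it != entries.end() && total > budget;) {
        if (it->payload.use_count() == 1) {   // uniquely held by the cache
            total -= it->size;
            it = entries.erase(it);
        } else {
            ++it;                             // still shared elsewhere; keep
        }
    }
}
```

This mirrors the design choice noted above: resources someone else still holds a ref to survive, even if that keeps the cache over budget.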
// Drop uniquely held refs until under the resource cache's budget.
// To hold vertex data in the cache and have it transparently transition from cpu-side to
// gpu-side while being shared between all the threads, we need a ref counted object that
// keeps hold of the cpu-side data but allows deferred filling in of the mirroring gpu buffer.
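A rough model of such an object, using a plain int as a stand-in for a GPU buffer handle. The class and members below are illustrative assumptions, not Skia's VertexData API:

```cpp
#include <cassert>
#include <vector>

// Hypothetical ref-counted holder that starts with cpu-side bytes and can
// later be handed the mirroring gpu buffer, after which the cpu copy can go.
class VertexData {
public:
    explicit VertexData(std::vector<float> cpuData)
            : fCpuData(std::move(cpuData)) {}

    bool hasGpuBuffer() const { return fGpuBufferId != 0; }

    // Called once on the direct (gpu) context: record the gpu buffer and
    // release the now-redundant cpu-side copy.
    void setGpuBuffer(int gpuBufferId) {
        fGpuBufferId = gpuBufferId;
        fCpuData.clear();
        fCpuData.shrink_to_fit();
    }

private:
    std::vector<float> fCpuData;  // valid until the gpu buffer exists
    int fGpuBufferId = 0;         // 0 == no gpu buffer yet (assumption)
};
```

Recording threads can read the cpu-side data freely; only the thread with the direct context performs the one-time transition to the gpu buffer.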
// To allow gpu-created resources to have priority, we pre-emptively place a lazy proxy
// in the thread-safe cache (with findOrAdd). The Trampoline object allows that lazy proxy
// to be filled in later, once the gpu-side version of the resource has been created.
struct Entry {
    Entry(const GrUniqueKey& key, const GrSurfaceProxyView& view)
            : fKey(key)
            , fView(view)
            , fTag(Entry::kView) {}

    Entry(const GrUniqueKey& key, sk_sp<VertexData> vertData)
            : fKey(key)
            , fVertData(std::move(vertData))
            , fTag(Entry::kVertData) {}

    ~Entry() {
        this->makeEmpty();
    }

    bool uniquelyHeld() const {
        if (fTag == kView && fView.proxy()->unique()) {
            return true;
        } else if (fTag == kVertData && fVertData->unique()) {
            return true;
        }
        return false;
    }
    // The thread-safe cache gets to directly manipulate the llist and last-access members
    SK_DECLARE_INTERNAL_LLIST_INTERFACE(Entry);

    static const GrUniqueKey& GetKey(const Entry& e) { return e.fKey; }
void makeExistingEntryMRU(Entry*) SK_REQUIRES(fSpinLock);
Entry* makeNewEntryMRU(Entry*) SK_REQUIRES(fSpinLock);

Entry* getEntry(const GrUniqueKey&, const GrSurfaceProxyView&) SK_REQUIRES(fSpinLock);
Entry* getEntry(const GrUniqueKey&, sk_sp<VertexData>) SK_REQUIRES(fSpinLock);

void recycleEntry(Entry*) SK_REQUIRES(fSpinLock);
SkTDynamicHash<Entry, GrUniqueKey> fUniquelyKeyedEntryMap SK_GUARDED_BY(fSpinLock);
SkTInternalLList<Entry> fUniquelyKeyedEntryList SK_GUARDED_BY(fSpinLock);

static const int kInitialArenaSize = 64 * sizeof(Entry);
Entry* fFreeEntryList SK_GUARDED_BY(fSpinLock);
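The pairing of fUniquelyKeyedEntryMap and fUniquelyKeyedEntryList above is the classic hash-map-plus-linked-list LRU arrangement: the map gives O(1) lookup by key while the list tracks recency. A non-intrusive sketch of that arrangement (illustrative names, not Skia's code):

```cpp
#include <cassert>
#include <list>
#include <string>
#include <unordered_map>

// Hypothetical LRU index: touching a key moves it to the MRU end of the list;
// the map lets us find a key's list node in O(1).
class LruIndex {
public:
    // Inserts or touches 'key'; the entry becomes most recently used.
    void use(const std::string& key) {
        auto it = fMap.find(key);
        if (it != fMap.end()) {
            fList.erase(it->second);       // unlink from old position
        }
        fList.push_back(key);              // MRU lives at the back
        fMap[key] = std::prev(fList.end());
    }

    // Least recently used key, i.e., the first purge candidate.
    const std::string& lru() const { return fList.front(); }

private:
    std::list<std::string> fList;          // LRU (front) -> MRU (back)
    std::unordered_map<std::string,
                       std::list<std::string>::iterator> fMap;
};
```

Skia's intrusive SkTInternalLList variant avoids the separate list-node allocation by embedding the links in Entry itself, which is also why the cache recycles entries through an arena and free list.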