Lines Matching full:migration

3  * Memory Migration functionality - linux/mm/migrate.c
7 * Page migration was first developed in the context of the memory hotplug
8 * project. The main authors of the migration code are:
111 * compaction threads can race against page migration functions in isolate_movable_page()
115 * being (wrongly) re-isolated while it is under migration, in isolate_movable_page()
163 * from where they were once taken off for compaction/migration.
203 * Restore a potential migration pte to a working pte entry
227 /* PMD-mapped THP migration entry */ in remove_migration_pte()
241 * Recheck VMA as permissions can change since migration started in remove_migration_pte()
291 * Get rid of all migration entries and replace them by
308 * Something used the pte of a page under migration. We need to
309 * get to the page and wait until migration is finished.
332 * Once page cache replacement of page migration started, page_count in __migration_entry_wait()
690 * Migration functions
750 * async migration. Release the taken locks in buffer_migrate_lock_buffers()
846 * Migration function for pages with buffers. This function can only be used
895 * migration. Writeout may mean we lose the lock and the in writeout()
897 * At this point we know that the migration attempt cannot in writeout()
912 * Default handling if a filesystem does not provide a migration function.
918 /* Only writeback pages in full synchronous migration */ in fallback_migrate_page()
972 * for page migration. in move_to_new_page()
982 * isolation step. In that case, we shouldn't try migration. in move_to_new_page()
1061 * Only in the case of a full synchronous migration is it in __unmap_and_move()
1083 * of migration. File cache pages are no problem because of page_lock() in __unmap_and_move()
1084 * File Caches may use write_page() or lock_page() in migration, then, in __unmap_and_move()
1131 /* Establish migration ptes */ in __unmap_and_move()
1154 * If migration is successful, decrease refcount of the newpage in __unmap_and_move()
1229 * If migration is successful, releases reference grabbed during in unmap_and_move()
1265 * Counterpart of unmap_and_move_page() for hugepage migration.
1268 * because there is no race between I/O and migration for hugepage.
1276 * hugepage migration fails without data corruption.
1278 * There is also no race when direct I/O is issued on the page under migration,
1279 * because then pte is replaced with migration swap entry and direct I/O code
1280 * will wait in the page fault for migration to complete.
1295 * This check is necessary because some callers of hugepage migration in unmap_and_move_huge_page()
1298 * kicking migration. in unmap_and_move_huge_page()
1390 * If migration was not successful and there's a freeing callback, use in unmap_and_move_huge_page()
1404 * supplied as the target for the page migration
1408 * as the target of the page migration.
1409 * @put_new_page: The function used to free target pages if migration
1412 * @mode: The migration mode that specifies the constraints for
1413 * page migration, if any.
1414 * @reason: The reason for page migration.
1453 * during migration. in migrate_pages()
1471 * THP migration might be unsupported or the in migrate_pages()
1516 * removed from migration page list and not in migrate_pages()
1571 * clear __GFP_RECLAIM to make the migration callback in alloc_migration_target()
1778 /* The page is successfully queued for migration */ in do_pages_move()
1991 * Returns true if this is a safe migration target node for misplaced NUMA
2045 * migrate_misplaced_transhuge_page() skips page migration's usual in numamigrate_isolate_page()
2047 * has been isolated: a GUP pin, or any other pin, prevents migration. in numamigrate_isolate_page()
2063 * disappearing underneath us during migration. in numamigrate_isolate_page()
2161 /* Prepare a page as a migration target */ in migrate_misplaced_transhuge_page()
2426 * any kind of migration. Side effect is that it "freezes" the in migrate_vma_collect_pmd()
2439 * set up a special migration page table entry now. in migrate_vma_collect_pmd()
2447 /* Setup special migration page table entry */ in migrate_vma_collect_pmd()
2497 * @migrate: migrate struct containing all migration information
2529 * migrate_page_move_mapping(), except that here we allow migration of a
2553 * GUP will fail for those. Yet if there is a pending migration in migrate_vma_check_page()
2554 * a thread might try to wait on the pte migration entry and in migrate_vma_check_page()
2556 * differentiate a regular pin from migration wait. Hence to in migrate_vma_check_page()
2558 * infinite loop (one stopping migration because the other is in migrate_vma_check_page()
2559 * waiting on pte migration entry). We always return true here. in migrate_vma_check_page()
2579 * @migrate: migrate struct containing all migration information
2605 * a deadlock between two concurrent migrations where each in migrate_vma_prepare()
2686 * migrate_vma_unmap() - replace page mapping with special migration pte entry
2687 * @migrate: migrate struct containing all migration information
2689 * Replace page mapping (CPU page table pte) with a special migration pte entry
2745 * @args: contains the vma, start, and pfns arrays for the migration
2766 * with MIGRATE_PFN_MIGRATE flag in src array unless this is a migration from
2776 * properly set the destination entry like for regular migration. Note that
2778 * migration was successful for those entries after calling migrate_vma_pages()
2779 * just like for regular migration.
2991 * @migrate: migrate struct containing all migration information
2994 * struct page. This effectively finishes the migration from source page to the
3079 * @migrate: migrate struct containing all migration information
3081 * This replaces the special migration pte entry with either a mapping to the
3082 * new page if migration was successful for that page, or to the original page