Lines Matching full:migration
3 * Memory Migration functionality - linux/mm/migrate.c
7 * Page migration was first developed in the context of the memory hotplug
8 * project. The main authors of the migration code are:
111 * compaction threads can race against page migration functions in isolate_movable_page()
115 * being (wrongly) re-isolated while it is under migration, in isolate_movable_page()
163 * from where they were once taken off for compaction/migration.
203 * Restore a potential migration pte to a working pte entry
227 /* PMD-mapped THP migration entry */ in remove_migration_pte()
241 * Recheck VMA as permissions can change since migration started in remove_migration_pte()
291 * Get rid of all migration entries and replace them by
308 * Something used the pte of a page under migration. We need to
309 * get to the page and wait until migration is finished.
332 * Once page cache replacement of page migration started, page_count in __migration_entry_wait()
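The fragments above (remove_migration_pte, __migration_entry_wait) describe how a thread that faults on a page under migration recognizes the special migration PTE and waits for migration to finish. A minimal userspace sketch of the encoding idea, with all bit layouts and constant values hypothetical rather than the kernel's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified model: a non-present PTE stores a swap-style entry whose
 * type field can mark the page as "under migration".  The width and
 * the type value below are illustrative, not the kernel's layout. */
#define SWP_TYPE_BITS        5
#define SWP_MIGRATION_READ   30u   /* hypothetical type value */

typedef uint64_t swp_entry_t;

static swp_entry_t make_migration_entry(uint64_t pfn)
{
    return ((uint64_t)SWP_MIGRATION_READ << (64 - SWP_TYPE_BITS)) | pfn;
}

static bool is_migration_entry(swp_entry_t e)
{
    return (e >> (64 - SWP_TYPE_BITS)) == SWP_MIGRATION_READ;
}

static uint64_t migration_entry_to_pfn(swp_entry_t e)
{
    return e & ((1ull << (64 - SWP_TYPE_BITS)) - 1);
}
```

A faulting thread that decodes such an entry knows which page is in flight and can block until the migration side removes the entry again; once migration completes, remove_migration_pte() rewrites the slot as a working PTE.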
697 * Migration functions
757 * async migration. Release the taken locks in buffer_migrate_lock_buffers()
853 * Migration function for pages with buffers. This function can only be used
902 * migration. Writeout may mean we lose the lock and the in writeout()
904 * At this point we know that the migration attempt cannot in writeout()
919 * Default handling if a filesystem does not provide a migration function.
925 /* Only writeback pages in full synchronous migration */ in fallback_migrate_page()
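Lines 853-925 cover migration of pages with buffers and the fallback path when a filesystem provides no migration function: a dirty page may only be written out under full synchronous migration. A hedged sketch of that decision, using a simplified stand-in for the kernel's migrate_mode (helper name and return convention are illustrative):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Names mirror the kernel's migrate_mode, but this is a simplified
 * model of the fallback decision, not the real fallback_migrate_page(). */
enum migrate_mode { MIGRATE_ASYNC, MIGRATE_SYNC_LIGHT, MIGRATE_SYNC };

/* Returns 0 when migration may proceed, -EBUSY when the mode
 * forbids the writeback a dirty page would require. */
static int fallback_migrate_sketch(enum migrate_mode mode, bool dirty)
{
    if (dirty) {
        if (mode != MIGRATE_SYNC)
            return -EBUSY;  /* only writeback in full sync migration */
        /* writeout(mapping, page) would happen here; after that the
         * migration attempt itself is abandoned for this pass */
    }
    return 0;
}
```

This matches the constraint quoted at line 925: async and light-sync callers (e.g. compaction) would rather skip the page than block on I/O.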
979 * for page migration. in move_to_new_page()
989 * isolation step. In that case, we shouldn't try migration. in move_to_new_page()
1071 * Only in the case of a full synchronous migration is it in __unmap_and_move()
1093 * of migration. File cache pages are no problem because of page_lock() in __unmap_and_move()
1094 * File Caches may use write_page() or lock_page() in migration, then, in __unmap_and_move()
1141 /* Establish migration ptes */ in __unmap_and_move()
1164 * If migration is successful, decrease refcount of the newpage in __unmap_and_move()
1239 * If migration is successful, releases reference grabbed during in unmap_and_move()
1275 * Counterpart of unmap_and_move_page() for hugepage migration.
1278 * because there is no race between I/O and migration for hugepage.
1286 * hugepage migration fails without data corruption.
1288 * There is also no race when direct I/O is issued on the page under migration,
1289 * because then pte is replaced with migration swap entry and direct I/O code
1290 * will wait in the page fault for migration to complete.
1305 * This check is necessary because some callers of hugepage migration in unmap_and_move_huge_page()
1308 * kicking migration. in unmap_and_move_huge_page()
1400 * If migration was not successful and there's a freeing callback, use in unmap_and_move_huge_page()
1414 * supplied as the target for the page migration
1418 * as the target of the page migration.
1419 * @put_new_page: The function used to free target pages if migration
1422 * @mode: The migration mode that specifies the constraints for
1423 * page migration, if any.
1424 * @reason: The reason for page migration.
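The kernel-doc fragments at lines 1414-1424 describe the migrate_pages() contract: the caller supplies an allocator for target pages and a matching free callback used when migration fails. A simplified userspace model of that contract (all types, names, and the success/failure rule are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

struct page { int migrated; };

typedef struct page *(*new_page_t)(void *private);
typedef void (*put_page_t)(struct page *p, void *private);

/* Tiny fixed pool standing in for the caller-supplied allocator. */
static struct page target_pool[4];
static size_t next_target, freed_targets;

static struct page *get_target(void *private)
{
    (void)private;
    return next_target < 4 ? &target_pool[next_target++] : NULL;
}

static void put_target(struct page *p, void *private)
{
    (void)p; (void)private;
    freed_targets++;
}

/* Model of migrate_pages(): returns the number of pages that could
 * NOT be migrated.  Here "the move succeeds for even indices" is an
 * arbitrary stand-in for the real unmap-and-move step. */
static int migrate_pages_sketch(struct page *pages, size_t n,
                                new_page_t get_new, put_page_t put_new,
                                void *private)
{
    int nr_failed = 0;

    for (size_t i = 0; i < n; i++) {
        struct page *newpage = get_new(private);
        if (!newpage) { nr_failed++; continue; }
        if (i % 2 == 0) {
            pages[i].migrated = 1;
        } else {
            nr_failed++;
            if (put_new)
                put_new(newpage, private);  /* free the unused target */
        }
    }
    return nr_failed;
}
```

The key shape to take away is the pairing: every target page obtained from @get_new_page that ends up unused must go back through @put_new_page, which is exactly what the fragment at line 1419 documents.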
1463 * during migration. in migrate_pages()
1481 * THP migration might be unsupported or the in migrate_pages()
1526 * removed from migration page list and not in migrate_pages()
1581 * clear __GFP_RECLAIM to make the migration callback in alloc_migration_target()
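Line 1581 notes that alloc_migration_target() clears __GFP_RECLAIM so the migration callback's allocation fails fast instead of entering reclaim. A sketch of that masking pattern; the flag values below are made up, only the clear-then-or shape mirrors the fragment:

```c
#include <assert.h>

/* Illustrative gfp-style bit flags (values are NOT the kernel's). */
#define __GFP_DIRECT_RECLAIM  0x400u
#define __GFP_KSWAPD_RECLAIM  0x800u
#define __GFP_RECLAIM  (__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)
#define GFP_TRANSHUGE_LIGHT   0x2000u  /* stand-in value */

static unsigned int thp_target_gfp(unsigned int gfp_mask)
{
    /* best-effort THP attempt: fail quickly rather than reclaim */
    gfp_mask &= ~__GFP_RECLAIM;
    gfp_mask |= GFP_TRANSHUGE_LIGHT;
    return gfp_mask;
}
```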
1789 /* The page is successfully queued for migration */ in do_pages_move()
2002 * Returns true if this is a safe migration target node for misplaced NUMA
2056 * migrate_misplaced_transhuge_page() skips page migration's usual in numamigrate_isolate_page()
2058 * has been isolated: a GUP pin, or any other pin, prevents migration. in numamigrate_isolate_page()
2074 * disappearing underneath us during migration. in numamigrate_isolate_page()
2172 /* Prepare a page as a migration target */ in migrate_misplaced_transhuge_page()
2437 * any kind of migration. Side effect is that it "freezes" the in migrate_vma_collect_pmd()
2450 * set up a special migration page table entry now. in migrate_vma_collect_pmd()
2458 /* Setup special migration page table entry */ in migrate_vma_collect_pmd()
2509 * @migrate: migrate struct containing all migration information
2541 * migrate_page_move_mapping(), except that here we allow migration of a
2565 * GUP will fail for those. Yet if there is a pending migration in migrate_vma_check_page()
2566 * a thread might try to wait on the pte migration entry and in migrate_vma_check_page()
2568 * differentiate a regular pin from migration wait. Hence to in migrate_vma_check_page()
2570 * infinite loop (one stopping migration because the other is in migrate_vma_check_page()
2571 * waiting on pte migration entry). We always return true here. in migrate_vma_check_page()
2591 * @migrate: migrate struct containing all migration information
2617 * a deadlock between 2 concurrent migrations where each in migrate_vma_prepare()
2698 * migrate_vma_unmap() - replace page mapping with special migration pte entry
2699 * @migrate: migrate struct containing all migration information
2701 * Replace page mapping (CPU page table pte) with a special migration pte entry
2757 * @args: contains the vma, start, and pfns arrays for the migration
2778 * with MIGRATE_PFN_MIGRATE flag in src array unless this is a migration from
2788 * properly set the destination entry like for regular migration. Note that
2790 * migration was successful for those entries after calling migrate_vma_pages()
2791 * just like for regular migration.
3003 * @migrate: migrate struct containing all migration information
3006 * struct page. This effectively finishes the migration from source page to the
3091 * @migrate: migrate struct containing all migration information
3093 * This replaces the special migration pte entry with either a mapping to the
3094 * new page if migration was successful for that page, or to the original page
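The fragments from line 2698 onward outline the migrate_vma phases: unmap installs special migration PTEs, migrate_vma_pages() copies the flagged entries, and finalize restores mappings, pointing at the new page only where migration succeeded. A simplified per-entry flag model of that flow (flag values and helper names are illustrative, not the kernel's MIGRATE_PFN_* encoding):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified model: each src[] entry carries a MIGRATE flag; the
 * pages phase copies only flagged entries and clears the flag where
 * the copy failed, so finalize can restore the original mapping. */
#define MPFN_VALID    0x1u
#define MPFN_MIGRATE  0x2u

static void pages_phase(uint64_t *src, const int *copy_ok, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (!(src[i] & MPFN_MIGRATE))
            continue;                 /* caller opted this page out */
        if (!copy_ok[i])
            src[i] &= ~MPFN_MIGRATE;  /* copy failed: keep original */
    }
}

/* After finalize, a set MPFN_MIGRATE flag means the entry now maps
 * the new page; a clear flag means the original page was restored. */
static int migrated(uint64_t e) { return (e & MPFN_MIGRATE) != 0; }
```

This mirrors the contract quoted at lines 2788-2791: the caller checks the MIGRATE flag in the src array after migrate_vma_pages() to learn which entries actually moved.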