


Strategies for efficiently searching specified subfolders in Python: application and optimization of os.scandir
Oct 12, 2025 09:48 AM

Performance bottlenecks of traditional directory scanning methods
In Python, a common way to enumerate directory contents is to use os.listdir(). However, this approach can suffer from significant performance issues when dealing with very large directories containing hundreds of thousands or more files and subfolders. The main reasons are:
- Extra system-call overhead: os.listdir() first retrieves the names of all files and folders under the given path. To then determine whether each entry is a directory (for example, with os.path.isdir()), the program must make a separate system call per name to obtain its metadata. For N entries, this adds N extra system calls, producing a large amount of I/O and time cost.
- Memory usage: os.listdir() loads every entry name in the directory into memory at once, which can mean substantial memory consumption for directories with a huge number of entries.
- Regular expression matching: after all entries are retrieved, they are filtered with regular expressions. Although powerful, each match still adds computational cost, which matters for massive directories.
The following is a typical legacy implementation example that can cause performance issues:
```python
import os
import re

def find_subfolders_inefficient(dir_of_interest, starting_string_of_interest):
    # 1. Get all file and folder names
    all_entries = os.listdir(dir_of_interest)
    # 2. Filter out subfolders (each os.path.isdir() is a system call)
    all_subfolders = [
        item for item in all_entries
        if os.path.isdir(os.path.join(dir_of_interest, item))
    ]
    # 3. Use a regular expression to match the prefix
    regexp_pattern = re.compile(starting_string_of_interest)
    all_subfolders_of_interest = list(filter(regexp_pattern.match, all_subfolders))
    return all_subfolders_of_interest

# Example call
# subfolders = find_subfolders_inefficient('path/to/large/folder', 'prefix_')
```
os.scandir: Efficient directory iterator
In order to solve the above performance bottleneck, Python 3.5 introduced the os.scandir() function. os.scandir() provides a more efficient directory iterator. Its core advantages are:
- Fewer system calls: os.scandir() returns an iterator, and each iteration yields an os.DirEntry object. On most platforms, the operating system reports the file type together with the name during the directory scan, and the DirEntry caches it, so separate calls to os.path.isdir() or os.path.isfile() are usually unnecessary. This greatly reduces the number of queries to the file system.
- Iterator pattern: os.scandir() does not load all entries into memory at once; it yields DirEntry objects one by one on demand. This makes it extremely memory-efficient when handling very large directories.
- Direct attribute access: the DirEntry object provides name (the entry's base name), path (the full path), is_dir(), is_file(), and related methods, so type checks and lookups need no extra path joining.
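The cached attributes can be seen directly. Here is a minimal, self-contained sketch (it builds a throwaway temporary directory purely for illustration) showing what a DirEntry exposes:

```python
import os
import tempfile

# Build a small throwaway directory so the example is self-contained
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
open(os.path.join(root, "data.txt"), "w").close()

info = {}
with os.scandir(root) as entries:
    for entry in entries:
        # is_dir()/is_file() read the type cached on the DirEntry,
        # avoiding a separate os.path.isdir() system call per entry
        info[entry.name] = (entry.is_dir(), entry.is_file())

print(info)  # e.g. {'sub': (True, False), 'data.txt': (False, True)}
```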
Optimized implementation: use os.scandir to find subfolders
Rewriting the subfolder search around os.scandir() can significantly improve performance. Here is an optimized implementation:
```python
import os

def find_subfolders_efficient(dir_of_interest, starting_string_of_interest):
    """Use os.scandir to efficiently find subfolders whose names start with a given string.

    Args:
        dir_of_interest (str): Directory path to scan.
        starting_string_of_interest (str): Prefix that subfolder names must match.

    Returns:
        list: Names of the matching subfolders.
    """
    all_subfolders_of_interest = []
    try:
        # Iterate over directory entries
        with os.scandir(dir_of_interest) as entries:
            for entry in entries:
                # entry.is_dir() uses the cached type, avoiding an extra system call;
                # entry.name gives the base name directly, avoiding path joining
                if entry.is_dir() and entry.name.startswith(starting_string_of_interest):
                    all_subfolders_of_interest.append(entry.name)
    except FileNotFoundError:
        print(f"Error: Directory '{dir_of_interest}' does not exist.")
    except PermissionError:
        print(f"Error: No permission to access directory '{dir_of_interest}'.")
    except OSError as e:
        print(f"An error occurred while scanning the directory: {e}")
    return all_subfolders_of_interest

# Example call
if __name__ == '__main__':
    # Create a test directory structure (optional)
    # os.makedirs('test_large_folder/prefix_sub1', exist_ok=True)
    # os.makedirs('test_large_folder/another_sub', exist_ok=True)
    # os.makedirs('test_large_folder/prefix_sub2', exist_ok=True)
    # with open('test_large_folder/file.txt', 'w') as f:
    #     f.write("test")

    target_dir = 'test_large_folder'  # Replace with your actual directory
    search_prefix = 'prefix_'
    print(f"Searching for subfolders starting with '{search_prefix}' in {target_dir}...")
    found_subfolders = find_subfolders_efficient(target_dir, search_prefix)
    if found_subfolders:
        print("Found the following subfolders:")
        for folder in found_subfolders:
            print(f"- {folder}")
    else:
        print("No matching subfolder found.")
```
In the code above, we iterate directly over the DirEntry objects returned by os.scandir(), using entry.is_dir() to test for directories and entry.name.startswith() for prefix matching. This collapses type checking and name filtering into a single loop, avoiding intermediate list creation and extra system calls, which yields a significant performance improvement.
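To make the difference concrete, here is a rough micro-benchmark sketch (the directory, entry count, and repetition count are illustrative assumptions, and absolute timings will vary by platform and file system):

```python
import os
import tempfile
import timeit

# Build a throwaway directory with a few hundred subfolders
root = tempfile.mkdtemp()
for i in range(300):
    os.makedirs(os.path.join(root, f"prefix_{i}"))

def with_listdir():
    # One system call for the listing, plus one stat per entry
    return [n for n in os.listdir(root)
            if os.path.isdir(os.path.join(root, n)) and n.startswith("prefix_")]

def with_scandir():
    # Type information comes back with the directory scan itself
    with os.scandir(root) as it:
        return [e.name for e in it if e.is_dir() and e.name.startswith("prefix_")]

# Both approaches must agree on the result
assert sorted(with_listdir()) == sorted(with_scandir())

t1 = timeit.timeit(with_listdir, number=50)
t2 = timeit.timeit(with_scandir, number=50)
print(f"listdir+isdir: {t1:.3f}s  scandir: {t2:.3f}s")
```

On most systems the scandir variant is noticeably faster, and the gap widens as the entry count grows.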
Things to note and best practices
- Error handling: in real applications, always account for conditions such as a missing file or directory and insufficient permissions, and handle them appropriately, as the try-except blocks in the sample code do.
- Resource management: the iterator returned by os.scandir() holds a file system resource. Use a with statement so the iterator is closed correctly after use and the resource is released even if an exception occurs.
- Cross-platform compatibility: os.scandir() is cross-platform and works correctly on Windows, Linux, and macOS.
- Combining with pathlib: for a more modern, object-oriented API to file system operations, consider the pathlib module. pathlib.Path also provides an iterdir() method, which in CPython is implemented on top of os.scandir().
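As a minimal sketch of the pathlib approach (the temporary directory and folder names here are invented for illustration):

```python
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
(root / "prefix_a").mkdir()
(root / "prefix_b").mkdir()
(root / "other").mkdir()

# Path.iterdir() is backed by os.scandir() in CPython. Note that
# Path.is_dir() performs its own stat() call rather than using a cached
# DirEntry type, so raw os.scandir() can still be faster in hot loops.
matches = sorted(p.name for p in root.iterdir()
                 if p.is_dir() and p.name.startswith("prefix_"))
print(matches)  # ['prefix_a', 'prefix_b']
```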
Summary
os.scandir() is an indispensable optimization tool when dealing with large-scale directory scanning tasks in Python. It significantly improves the performance and memory efficiency of file system operations by providing efficient directory iterators, caching file type information, and avoiding unnecessary system calls. Migrating from the combination of os.listdir and os.path.isdir to os.scandir is a key step in optimizing Python file system interaction, especially for scenarios where specific files or directories need to be quickly retrieved.
The above is the detailed content of Strategies for efficiently searching specified subfolders in Python: application and optimization of os.scandir. For more information, please follow other related articles on the PHP Chinese website!
