

Data Preprocessing Techniques for ML Models

Dec 03, 2024, 10:39 AM


Data preprocessing is the set of steps applied to a dataset before it is used for machine learning or other tasks. It involves cleaning, formatting or transforming the data in order to improve its quality or ensure that it is suitable for its main purpose (in this case, training a model). A clean, high-quality dataset enhances a machine learning model's performance.

Common issues with low-quality data include:

  • Missing values
  • Inconsistent formats
  • Duplicate values
  • Irrelevant features

In this article, I will show you some of the common data preprocessing techniques to prepare datasets for use in training models. You will need basic knowledge of Python and how to use Python libraries and frameworks.

Requirements:
The following are required to get the best out of this guide:

  • Python 3.12
  • Jupyter Notebook or your favourite notebook
  • NumPy
  • pandas
  • SciPy
  • scikit-learn
  • Melbourne Housing Dataset

You can also check out the output of each code block in these Jupyter notebooks on GitHub.

Setup

If you haven't installed Python already, you can download it from the Python website and follow the instructions to install it.

Once Python has been installed, install the required libraries:

pip install numpy scipy pandas scikit-learn

Install Jupyter Notebook.

pip install notebook

After installation, start Jupyter Notebook with the following command:

jupyter notebook

This will launch Jupyter Notebook in your default web browser. If it doesn't, check the terminal for a link you can manually paste into your browser.

Open a new notebook from the File menu, import the required libraries, and run the cell:

import numpy as np
import pandas as pd
import scipy
import sklearn

Go to the Melbourne Housing Dataset site and download the dataset, then load it into the notebook using the following code. You can copy the file path on your computer and paste it into the read_csv function, or put the CSV file in the same folder as the notebook and load it as seen below.

data = pd.read_csv(r"melb_data.csv")

# View the first 5 rows of the dataset
data.head()

Split the data into training and validation sets

from sklearn.model_selection import train_test_split

# Set the target
y = data['Price']

# Drop the target column from the features
melb_features = data.drop(['Price'], axis=1)

# Keep only the numeric features by excluding categorical (object) columns
X = melb_features.select_dtypes(exclude=['object'])

# Divide data into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=0)

You have to split the data into training and validation sets before preprocessing in order to prevent data leakage. As a result, whatever preprocessing step you fit on the training features is then applied in exactly the same way to the validation features.

Now the dataset is ready to be processed!

Data Cleaning

Handling missing values
Missing values in a dataset are like holes in the fabric meant for sewing a dress: they spoil the dress before it is even made.

There are 3 ways to handle missing values in a dataset.

  1. Drop the rows or columns with empty cells
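
A minimal sketch of this approach with pandas, dropping every column in the training set that contains at least one missing value (the column-wise choice and the variable names are illustrative, not necessarily the author's original code):

# Find the columns in the training set that contain missing values
cols_with_missing = [col for col in X_train.columns if X_train[col].isnull().any()]

# Drop those columns from both the training and validation sets
reduced_X_train = X_train.drop(cols_with_missing, axis=1)
reduced_X_valid = X_valid.drop(cols_with_missing, axis=1)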

The issue with this method is that you may lose valuable information that your model could have learned from. Unless most of the values in a row or column are missing, there is no need to drop it.

  2. Impute values in the empty cells: You can impute or fill in the empty cells with the mean, median or mode of the data in that particular column. SimpleImputer from scikit-learn will be used to impute values in the empty cells, as sketched below.
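
A minimal sketch with SimpleImputer using the mean strategy, fitted on the training split and then applied to the validation split (the imputed_* names are illustrative):

from sklearn.impute import SimpleImputer

# Impute missing values with the mean of each column
my_imputer = SimpleImputer(strategy='mean')
imputed_X_train = pd.DataFrame(my_imputer.fit_transform(X_train))
imputed_X_valid = pd.DataFrame(my_imputer.transform(X_valid))

# fit_transform and transform return arrays without column names, so restore them
imputed_X_train.columns = X_train.columns
imputed_X_valid.columns = X_valid.columns
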
  3. Impute and notify: How this works is that you impute values in the empty cells, but you also add a column that indicates that the cell was initially empty.
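
A sketch of this variant, reusing the cols_with_missing list from the first sketch to add a boolean indicator column for each affected feature before imputing (again, the names are illustrative):

from sklearn.impute import SimpleImputer

# Make copies so the original splits are not modified
X_train_plus = X_train.copy()
X_valid_plus = X_valid.copy()

# Add a column that records which cells were originally missing
for col in cols_with_missing:
    X_train_plus[col + '_was_missing'] = X_train_plus[col].isnull()
    X_valid_plus[col + '_was_missing'] = X_valid_plus[col].isnull()

# Then impute as before
my_imputer = SimpleImputer(strategy='mean')
imputed_X_train_plus = pd.DataFrame(my_imputer.fit_transform(X_train_plus))
imputed_X_valid_plus = pd.DataFrame(my_imputer.transform(X_valid_plus))
imputed_X_train_plus.columns = X_train_plus.columns
imputed_X_valid_plus.columns = X_valid_plus.columns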

Duplicate removal
Duplicate rows mean repeated data, and they affect model accuracy. The way to deal with them is to drop them.

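With pandas this is typically a one-liner; a sketch applied to the full dataset before splitting (where you drop duplicates in your own pipeline may differ):

# Remove duplicate rows, keeping the first occurrence of each
data = data.drop_duplicates(keep='first')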

Dealing with outliers
Outliers are values that are significantly different from the other values in the dataset. They can be unusually high or low compared to the rest of the data. They can arise from data entry errors, or they can be genuine extreme values.

It is important to deal with outliers or else they will lead to inaccurate data analysis or models. One method to detect outliers is by calculating z-scores.

The z-score measures how many standard deviations a data point is from the mean. This calculation is done for every data point, and if the absolute z-score of a data point is 3 or higher, the data point is treated as an outlier.

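A minimal sketch of z-score filtering with SciPy, applied here to the imputed training features from the earlier sketch (the 3-standard-deviation threshold follows the rule described above; variable names are illustrative):

from scipy import stats

# Build a boolean mask: True where every feature value is within 3 standard deviations
mask = (np.abs(stats.zscore(imputed_X_train.values)) < 3).all(axis=1)

# Keep only those rows; apply the same mask to the target so the rows stay aligned
filtered_X_train = imputed_X_train[mask]
filtered_y_train = y_train[mask]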

Data Transformation

Normalization
You normalize features so that they can be described by a normal distribution.

A normal distribution (also known as the Gaussian distribution) is a statistical distribution in which the data is spread roughly symmetrically above and below the mean. The graph of normally distributed data forms a bell curve.

Normalization matters when the machine learning algorithm you want to use assumes that the data is normally distributed. An example is the Gaussian Naive Bayes model.

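One common way to push skewed numeric features toward a normal distribution is scikit-learn's PowerTransformer; the Yeo-Johnson method in this sketch is an assumption rather than necessarily the author's choice (it handles zero and negative values, unlike Box-Cox), and the transformer is fitted on the training split only:

from sklearn.preprocessing import PowerTransformer

# Fit on the training features, then apply the same transform to validation
pt = PowerTransformer(method='yeo-johnson')
normalized_X_train = pd.DataFrame(pt.fit_transform(imputed_X_train), columns=imputed_X_train.columns)
normalized_X_valid = pd.DataFrame(pt.transform(imputed_X_valid), columns=imputed_X_valid.columns)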

Standardization
Standardization transforms the features of a dataset to have a mean of 0 and a standard deviation of 1. This process scales each feature so that it has similar ranges across the data. This ensures that each feature contributes equally to model training.

You use standardization when:

  • The features in your data are on different scales or units.
  • The machine learning model you want to use is based on distance or gradient-based optimizations (e.g., linear regression, logistic regression, K-means clustering).

You use StandardScaler() from the sklearn library to standardize features.

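A minimal sketch with StandardScaler, again fitted on the training split and applied unchanged to the validation split (the scaled_* names are illustrative):

from sklearn.preprocessing import StandardScaler

# Fit the scaler on the training features, then apply the same scaling to validation
scaler = StandardScaler()
scaled_X_train = pd.DataFrame(scaler.fit_transform(imputed_X_train), columns=imputed_X_train.columns)
scaled_X_valid = pd.DataFrame(scaler.transform(imputed_X_valid), columns=imputed_X_valid.columns)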

Conclusion

Data preprocessing is not just a preliminary stage. It is part of the process of building accurate machine learning models. It can also be tweaked to fit the needs of the dataset you are working with.

Like with most activities, practice makes perfect. As you continue to preprocess data, your skills will improve as well as your models.

I would love to read your thoughts on this.
