Naive Bayes

Q7: Why is “Naive” Bayes naive?

Answer: Despite its practical usefulness, especially in text mining, Naive Bayes is called “naive” because it rests on an assumption that real data almost never satisfies: the class-conditional probability of the whole feature vector is computed as the plain product of the individual feature probabilities. This amounts to assuming that the features are conditionally independent given the class, a condition rarely, if ever, met in practice.
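
Written out, that “pure product” assumption is a factorization of the class-conditional likelihood into per-feature terms:

\[ P(x_1, \ldots, x_n \mid y) = \prod_{i=1}^{n} P(x_i \mid y) \]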

As a Quora commenter put it whimsically, a Naive Bayes classifier that figured out that you liked pickles and ice cream would probably, naively, recommend you pickle ice cream.

Naive Bayes Overview

Naive Bayes is a simple yet powerful algorithm for predictive modeling and machine learning. It is based on Bayes' Theorem with the “naive” assumption of conditional independence between every pair of features given the value of the class variable. Despite these simplifications, Naive Bayes classifiers work well in many real-world situations, famously document classification and spam filtering.

How Naive Bayes Works

Naive Bayes classifiers work by correlating the presence (or absence) of features with the labels of training data, applying Bayes' Theorem to calculate probabilities and make predictions. The 'naive' aspect of the algorithm comes from assuming that the features are independent of each other, simplifying the computation of probabilities.

Given a class variable \(y\) and a dependent feature vector \(x_1\) through \(x_n\), Bayes' theorem states the following relationship:

\[ P(y \mid x_1, \ldots, x_n) = \frac{P(x_1, \ldots, x_n \mid y) \, P(y)}{P(x_1, \ldots, x_n)} \]

For prediction, we're interested in finding the class \(y\) with the highest posterior probability, given the features \(x_1\) to \(x_n\).
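
Combining Bayes' theorem with the independence assumption, and dropping the denominator (which does not depend on \(y\)), the prediction is the usual maximum-a-posteriori rule:

\[ \hat{y} = \arg\max_{y} \; P(y) \prod_{i=1}^{n} P(x_i \mid y) \]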

Types of Naive Bayes Models

The common variants differ mainly in how they model \(P(x_i \mid y)\): Gaussian Naive Bayes assumes continuous features follow a normal distribution within each class, Multinomial Naive Bayes models count data such as word frequencies, and Bernoulli Naive Bayes models binary presence/absence features.

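A minimal sketch of how the three scikit-learn variants are instantiated; the small arrays below are illustrative stand-ins, not real data:

from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB
import numpy as np

y = np.array([0, 0, 1, 1])

# Continuous measurements -> GaussianNB fits a normal distribution per class and feature
X_continuous = np.array([[1.8, 62.0], [1.7, 58.5], [1.6, 70.2], [1.9, 80.1]])
GaussianNB().fit(X_continuous, y)

# Non-negative counts (e.g., word frequencies) -> MultinomialNB
X_counts = np.array([[2, 0, 1], [1, 1, 0], [0, 3, 1], [0, 2, 2]])
MultinomialNB().fit(X_counts, y)

# Binary presence/absence features -> BernoulliNB
X_binary = (X_counts > 0).astype(int)
BernoulliNB().fit(X_binary, y)
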
Advantages and Disadvantages

Advantages:

Training and prediction are fast, even with many features, because the model only estimates per-class, per-feature statistics.
It needs relatively little training data to produce usable estimates.
It copes well with high-dimensional, sparse data, which is why it is a strong baseline for text classification.

Disadvantages:

The conditional independence assumption rarely holds, so predicted probabilities are often poorly calibrated even when the class ranking is reasonable.
A feature value never seen with a class during training gets zero probability unless smoothing (for example, Laplace/additive smoothing) is applied.
It cannot capture interactions between features.

Example: Document Classification with Multinomial Naive Bayes

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score
import seaborn as sns
import matplotlib.pyplot as plt

# Load dataset
data = fetch_20newsgroups(categories=['talk.religion.misc', 'soc.religion.christian', 'sci.space'])
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.25, random_state=0)

# Create a model pipeline: CountVectorizer converts the text documents into a
# matrix of token counts, and MultinomialNB classifies documents based on those counts
model = make_pipeline(CountVectorizer(), MultinomialNB())

# Train the model: fitting learns how word frequencies correlate with the document categories
model.fit(X_train, y_train)

# Predictions
predicted = model.predict(X_test)

# Accuracy
print("Accuracy:", accuracy_score(y_test, predicted))

# Confusion matrix
mat = confusion_matrix(y_test, predicted)
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False,
            xticklabels=data.target_names, yticklabels=data.target_names)
plt.xlabel('true label')
plt.ylabel('predicted label')
plt.show()

This example demonstrates how Multinomial Naive Bayes classifies documents into different categories: the text documents are converted into feature vectors (word counts in this case), and the Naive Bayes classifier is then applied to those counts. Despite its simplicity, Naive Bayes often achieves high accuracy in text classification tasks such as topic categorization and spam filtering.
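
As a quick usage sketch (assuming the pipeline above has already been fitted; the input strings are made-up examples), the trained model can label new documents directly:

def predict_category(text, model=model, target_names=data.target_names):
    # The pipeline vectorizes the raw string; predict returns the class index
    return target_names[model.predict([text])[0]]

print(predict_category("the shuttle payload reached orbit after launch"))
print(predict_category("a sermon about grace and the early church"))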