Support Vector Machine Example - Visualize in Three Dimensions
Support vector machines (SVMs), one of the most popular algorithms for data classification, are as powerful on three-dimensional data as on two-dimensional data. In this SVM example post, we'll use Python with the scikit-learn and matplotlib libraries to train a three-dimensional SVM and visualize the results.
Classification boundaries in three-dimensional space can be more complex, but visualization can make it easier to understand. Let's take a look at this example to see how SVMs handle three-dimensional data.

Understanding three-dimensional support vector machines (SVMs)
An SVM is an algorithm that finds the optimal boundary (hyperplane) separating data into two classes. In three dimensions, this hyperplane is a plane that divides the data into the different classes. In this example, we'll visualize a 3D classification boundary using data with three features.
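To make the hyperplane idea concrete, here is a small sketch (the variable names are our own, not from the post): for a linear SVM, the decision function is literally w·x + b, and the boundary is the set of points where it equals zero.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Small 3D dataset, analogous to the one used later in the post
X, y = make_classification(n_samples=50, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
clf = SVC(kernel='linear').fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
# For a linear SVM, decision_function(x) is exactly w . x + b;
# the separating plane is where this value is 0.
print(np.allclose(X @ w + b, clf.decision_function(X)))  # True
```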
Python code step-by-step
1. Import the required libraries
First, we load the necessary libraries to create the three-dimensional data and implement the SVM.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # Module for 3D visualization
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
- mpl_toolkits.mplot3d.Axes3D: the module required to draw 3D graphs.
- SVC: the support vector machine (SVM) classifier provided by scikit-learn.
2. Create sample data
Next, generate three-dimensional data to train the SVM.
# Generate 3D data with 3 features
X, y = datasets.make_classification(n_samples=100, n_features=3, n_informative=3, n_redundant=0, random_state=42)
# Separate data for training and testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
- n_samples=100: generate 100 samples.
- n_features=3: each sample has 3 features.
- n_informative=3: all three features carry information useful for classification.
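As a quick sanity check (our addition, not part of the original walkthrough), you can confirm the shapes these two calls produce:

```python
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split

X, y = datasets.make_classification(n_samples=100, n_features=3, n_informative=3,
                                    n_redundant=0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

print(X.shape)                        # (100, 3)
print(X_train.shape, X_test.shape)    # (70, 3) (30, 3) -- a 70/30 split
print(sorted(np.unique(y).tolist()))  # [0, 1] -- two classes by default
```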
3. Create and train an SVM model
Train an SVM model using the data you created.
# Create an SVM model for 3D data
model = SVC(kernel='linear')
# Train the model
model.fit(X_train, y_train)
- kernel='linear': find the classification boundary between the classes using a linear kernel.
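Before visualizing, it can help to check how well the trained model generalizes. This evaluation step is our own addition, not part of the original walkthrough; the exact accuracy depends on the random seed.

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = datasets.make_classification(n_samples=100, n_features=3, n_informative=3,
                                    n_redundant=0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = SVC(kernel='linear')
model.fit(X_train, y_train)

# Mean accuracy on the held-out test set
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
# Number of support vectors found for each class
print(f"support vectors per class: {model.n_support_}")
```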
4. Visualize SVM 3D results
You can now visualize the results of the trained SVM model in 3D. In three dimensions, the classification boundary appears as a plane.
# Function to visualize the 3D classification boundary of a trained SVM model
def plot_3d_decision_boundary(X, y, model):
    fig = plt.figure(figsize=(10, 8))
    ax = fig.add_subplot(111, projection='3d')
    # Plot the data points in 3D space
    ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=y, cmap='coolwarm', s=60, edgecolors='k')
    # Set up the grid for visualizing the boundary
    xlim = (X[:, 0].min(), X[:, 0].max())
    ylim = (X[:, 1].min(), X[:, 1].max())
    zlim = (X[:, 2].min(), X[:, 2].max())
    xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 30), np.linspace(ylim[0], ylim[1], 30))

Full integration code
Below is the complete code for a three-dimensional support vector machine (SVM) model, complete with comments.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # Module for plotting 3D graphs
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
# 1. Generate 3D data
# Generate 100 samples with 3 features.
# n_informative=3: use only 3 significant features, n_redundant=0: no redundant features
X, y = datasets.make_classification(n_samples=100, n_features=3, n_informative=3,
n_redundant=0, random_state=42)
# 2. Separate dataset into training and testing
# Use 70% of the data for training and 30% for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# 3. Generate SVM model
# Create an SVM model using a linear kernel.
model = SVC(kernel='linear')
# 4. Train the model
# Train the SVM model using the training data.
model.fit(X_train, y_train)
# 5. Define a 3D visualization function
def plot_3d_decision_boundary(X, y, model):
    # Generate a figure for the 3D graph
    fig = plt.figure(figsize=(10, 8))
    ax = fig.add_subplot(111, projection='3d')
    # 5.1 Visualize the data points
    # Display the data in 3D space, color-coded by class.
    ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=y, cmap='coolwarm', s=60, edgecolors='k')
    # 5.2 Set up a grid for boundary visualization
    xlim = (X[:, 0].min(), X[:, 0].max())
    ylim = (X[:, 1].min(), X[:, 1].max())
    zlim = (X[:, 2].min(), X[:, 2].max())
    xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 30), np.linspace(ylim[0], ylim[1], 30))
    # 5.3 Calculate the decision boundary plane
    # Define the boundary plane using the weights and intercept of the linear SVM.
    # The model's coef_ gives the slope (normal vector) of the decision boundary plane.
    Z = (-model.coef_[0][0] * xx - model.coef_[0][1] * yy - model.intercept_[0]) / model.coef_[0][2]
    # 5.4 Visualize the decision boundary plane
    ax.plot_surface(xx, yy, Z, color='green', alpha=0.3)
    # 5.5 Set axis labels
    ax.set_xlabel("Feature 1")
    ax.set_ylabel("Feature 2")
    ax.set_zlabel("Feature 3")
    ax.set_title("SVM 3D Decision Boundary")
    # 5.6 Display the graph
    plt.show()

# 6. Visualize the SVM decision boundary using test data
plot_3d_decision_boundary(X_test, y_test, model)

Code description:
- Generate data: the make_classification function creates three-dimensional data (n_features=3) that the SVM can be trained on.
- Split data: train_test_split separates the data into training and test sets.
- Create and train the SVM model: the SVC class creates an SVM model with a linear kernel, which is then trained on the training data.
- 3D visualization: the mpl_toolkits.mplot3d.Axes3D module provides the 3D plot, and the ax.plot_surface function draws the SVM's decision boundary plane.
Frequently asked questions (FAQ)
Q1. What are the decision boundaries for SVMs on three-dimensional data?
A1. In three-dimensional data, the decision boundary of an SVM appears as a plane. This plane separates the two classes, and the SVM learns by maximizing the distance of this plane from the nearest data points (support vectors).
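The margin mentioned above can be computed directly: for a linear SVM, the distance between the two margin planes is 2/||w||. A small sketch (our own addition, reusing the same synthetic data):

```python
import numpy as np
from sklearn import datasets
from sklearn.svm import SVC

X, y = datasets.make_classification(n_samples=100, n_features=3, n_informative=3,
                                    n_redundant=0, random_state=42)
model = SVC(kernel='linear').fit(X, y)

w = model.coef_[0]
margin = 2 / np.linalg.norm(w)  # distance between the two margin planes
print(f"margin width: {margin:.3f}")
```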
Q2. What are kernel functions?
A2. Kernel functions transform data into a higher-dimensional space so that the SVM can linearly separate data that is not linearly separable in the original space. In this example we used a linear kernel (linear), but there are many others, including the RBF kernel and polynomial kernels.
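To illustrate why the kernel choice matters (this dataset is our own example, not from the post): on two concentric rings, a linear kernel cannot separate the classes, while an RBF kernel can.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: not linearly separable in the original 2D space
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_acc = SVC(kernel='linear').fit(X, y).score(X, y)
rbf_acc = SVC(kernel='rbf').fit(X, y).score(X, y)
print(f"linear: {linear_acc:.2f}  rbf: {rbf_acc:.2f}")  # rbf separates far better
```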
Q3. What are coef_ and intercept_?
A3. coef_ represents the slope (normal vector) of the SVM's decision boundary, and intercept_ represents its intercept. Together, these two values define the classification boundary (plane).
Q4. Can I visualize higher dimensional data besides three-dimensional data?
A4. Direct visualization is limited to three dimensions. However, SVMs remain powerful on high-dimensional data; when dealing with it, dimensionality reduction techniques (such as PCA) can be used to project the data down for visualization.
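For example, PCA can project higher-dimensional data down to three components for plotting. A minimal sketch, assuming scikit-learn's PCA (the dataset parameters here are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

# 10-dimensional data we would like to inspect visually
X, y = make_classification(n_samples=100, n_features=10, n_informative=5,
                           random_state=42)

# Project down to 3 components so the result fits a 3D scatter plot
X_3d = PCA(n_components=3).fit_transform(X)
print(X_3d.shape)  # (100, 3)
```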
Q5. Can SVMs be used to classify non-linear data?
A5. Yes, SVMs can be applied using nonlinear kernels, such as RBF kernels or polynomial kernels, to classify nonlinear data. Kernel methods can be used to effectively classify even complex data distributions.
Conclusion
In this post, we learned how to implement a three-dimensional support vector machine (SVM) using Python and visualize the results. The visualized results gave us an intuitive understanding of how SVMs set classification boundaries and learn from three-dimensional data.
Apply SVMs to real-world data and explore their performance further by using different kernel functions or high-dimensional data.
