In computer science, a tree is a data structure made up of nodes and edges. Starting from a single root node, it branches out into other nodes connected by edges; those nodes may branch further, and a node that no longer branches is called a leaf node, as the diagram below shows:

[diagram: a root node at the top, branching through internal nodes down to leaf nodes]

In machine learning, such a tree can itself be a model: at every non-leaf node we check whether some feature satisfies some condition, follow one branch if it does and the other branch if it does not, and repeat until we reach a leaf. There are many tree-based models, and the more complex ones even combine the outputs of multiple trees, such as random forest, XGBoost, LightGBM, and so on. This article covers XGBoost from a practical angle.

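To make the decision process concrete, here is a minimal sketch of a decision tree written as plain Python objects; the feature indices, thresholds, and leaf labels are made up purely for illustration:

class Node:
	"""A non-leaf node: tests one feature against a threshold."""
	def __init__(self, feature, threshold, left, right):
		self.feature = feature      # index of the feature to test
		self.threshold = threshold  # condition: x[feature] < threshold
		self.left = left            # branch taken when the condition holds
		self.right = right          # branch taken otherwise


class Leaf:
	"""A leaf node: no more branches, just a prediction."""
	def __init__(self, value):
		self.value = value


def predict(tree, x):
	# Walk from the root down to a leaf, picking a branch at each node.
	while isinstance(tree, Node):
		tree = tree.left if x[tree.feature] < tree.threshold else tree.right
	return tree.value


# A hand-built tree with made-up thresholds.
tree = Node(0, 0.5, Leaf('A'), Node(1, 1.5, Leaf('B'), Leaf('C')))
print(predict(tree, [0.2, 0.0]))  # 'A'
print(predict(tree, [0.8, 2.0]))  # 'C'
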
XGBoost can be used for both classification and regression problems. The following example starts with classification:

import numpy as np
from sklearn.datasets import make_moons
from xgboost import XGBClassifier

# Each make_moons call samples fresh points from the same two-moons distribution.
x_train, y_train = make_moons(n_samples=800, shuffle=True, noise=0.1)
x_test, y_test = make_moons(n_samples=200, shuffle=True, noise=0.1)

param = {
	'n_estimators': 5,
	'max_depth': 3,
	'learning_rate': 0.01,
}

model = XGBClassifier(**param)
model.fit(x_train, y_train)
pred = model.predict(x_test)
print(f'Accuracy: {np.mean(pred == y_test)*100:.2f}%')

In the example above:

- make_moons generates a two-class toy dataset shaped like two interleaving half-moons.
- n_estimators is the number of boosted trees, max_depth limits the depth of each tree, and learning_rate scales how much each tree contributes to the final prediction.
- XGBClassifier follows the scikit-learn estimator interface, so the model is trained with fit and used with predict, just like any scikit-learn classifier.
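
Since XGBClassifier follows the scikit-learn interface, you can also ask for per-class probabilities instead of hard labels; for example:

proba = model.predict_proba(x_test)  # shape: (n_samples, n_classes)
print(proba[:3])  # predicted class probabilities for the first three samples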

If you want to track how the loss changes during training, pass the relevant datasets to fit, as follows:

import numpy as np
from sklearn.datasets import make_moons
from xgboost import XGBClassifier

x_train, y_train = make_moons(n_samples=800, shuffle=True, noise=0.1)
x_val, y_val = make_moons(n_samples=200, shuffle=True, noise=0.1)
x_test, y_test = make_moons(n_samples=200, shuffle=True, noise=0.1)

param = {
	'n_estimators': 5,
	'max_depth': 3,
	'learning_rate': 0.01,
}

model = XGBClassifier(**param)
model.fit(
	x_train,
	y_train,
	# Metrics on each (x, y) pair in eval_set are computed and printed
	# after every boosting round (fit's verbose option defaults to True).
	eval_set=[
		(x_val, y_val),
		(x_train, y_train),
	],
)
pred = model.predict(x_test)
print(f'Accuracy: {np.mean(pred == y_test)*100:.2f}%')
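
Besides being printed, the per-round metrics for each entry in eval_set are stored on the model and can be retrieved afterwards with evals_result(); with the default settings of a binary classifier, the recorded metric is the log-loss:

results = model.evals_result()
print(results.keys())  # dict_keys(['validation_0', 'validation_1'])
print(results['validation_0']['logloss'])  # loss on (x_val, y_val), one value per boosting round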

Scikit-learn classifiers usually do not let you design your own loss function, but XGBoost does. The following uses a regression problem to show how to write one:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Each make_regression call draws a brand-new random ground-truth function,
# so the data is generated once and then split into train/validation/test.
x, y = make_regression(n_samples=2400, shuffle=True, noise=0.01)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=400)
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=400)

param = {
	'n_estimators': 50,
	'max_depth': 5,
	'learning_rate': 0.01,
}

model = XGBRegressor(**param)
model.fit(
	x_train,
	y_train,
	eval_set=[
		(x_val, y_val),
		(x_train, y_train),
	],
)
pred = model.predict(x_test)
print(f'RMSE: {np.mean((pred - y_test) ** 2) ** 0.5:.4f}')


def my_loss(label, pred):
	"""
	loss = (label - pred) ** 2
	"""
	# XGBoost needs the first and second derivatives of the loss with
	# respect to the prediction, evaluated element-wise.
	grad = -2 * (label - pred)
	hess = 2 * np.ones_like(pred)
	return grad, hess


param['objective'] = my_loss  # use the custom loss as the training objective
model = XGBRegressor(**param)
model.fit(
	x_train,
	y_train,
	eval_set=[
		(x_val, y_val),
		(x_train, y_train),
	],
)
pred = model.predict(x_test)
print(f'RMSE: {np.mean((pred - y_test) ** 2) ** 0.5:.4f}')

In the example above:

- A custom objective is a callable that receives the labels and the current predictions and returns the gradient and hessian of the loss with respect to the predictions; XGBoost uses these two derivatives in its second-order approximation of the loss when fitting each new tree.
- For loss = (label - pred) ** 2, the first derivative with respect to pred is -2 * (label - pred) and the second derivative is the constant 2, which is exactly what my_loss returns.
- Assigning the callable to the 'objective' entry replaces the default squared-error objective; the rest of the training code stays unchanged.
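
Any differentiable loss can be plugged in the same way. As a further sketch (the pseudo-Huber loss and the delta value below are illustrative choices, not part of the original example), a loss that behaves like squared error for small residuals and like absolute error for large ones could be written as:

def pseudo_huber_loss(label, pred, delta=1.0):
	"""
	loss = delta**2 * ((1 + ((label - pred) / delta)**2)**0.5 - 1)
	"""
	r = label - pred
	scale = 1 + (r / delta) ** 2
	grad = -r / scale ** 0.5  # first derivative of the loss w.r.t. pred
	hess = 1 / scale ** 1.5   # second derivative of the loss w.r.t. pred
	return grad, hess


param['objective'] = pseudo_huber_loss
model = XGBRegressor(**param)
model.fit(x_train, y_train)
pred = model.predict(x_test)
print(f'RMSE: {np.mean((pred - y_test) ** 2) ** 0.5:.4f}')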