
Commit 01a3b41

Fix documentation (#99)

1 parent 8a5e5c3 commit 01a3b41

2 files changed: +7, −7 lines


examples/plot_knockoff_aggregation.py (1 addition, 1 deletion)

@@ -1,6 +1,6 @@
 """
 Knockoff aggregation on simulated data
-=============================
+======================================

 In this example, we show an example of variable selection using
 model-X Knockoffs introduced by :footcite:t:`Candes_2018`. A notable
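
For readers landing on this commit, the docstring above belongs to an example that performs knockoff-based variable selection. Below is a minimal NumPy/scikit-learn sketch of the idea for a Gaussian design, using equi-correlated model-X knockoffs and a Lasso coefficient-difference statistic; the sizes, Lasso alpha, and target FDR q are illustrative assumptions, and this is not the hidimstat implementation the example actually uses.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, q = 500, 20, 0.2  # samples, features, target FDR (all illustrative)

# Gaussian design with a known Toeplitz covariance
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
beta = np.zeros(p)
beta[:5] = 1.0  # the first five features are truly important
y = X @ beta + rng.standard_normal(n)

# Equi-correlated model-X knockoffs (Candes et al., 2018)
s = 0.99 * min(1.0, 2.0 * np.linalg.eigvalsh(Sigma).min())
D = s * np.eye(p)
Sigma_inv = np.linalg.inv(Sigma)
X_tilde = (X - X @ Sigma_inv @ D
           + rng.standard_normal((n, p))
           @ np.linalg.cholesky(2 * D - D @ Sigma_inv @ D).T)

# Coefficient-difference statistics and the knockoff+ threshold
coef = Lasso(alpha=0.05).fit(np.hstack([X, X_tilde]), y).coef_
W = np.abs(coef[:p]) - np.abs(coef[p:])
thresholds = np.sort(np.abs(W[W != 0]))
tau = next((t for t in thresholds
            if (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t)) <= q), np.inf)
print("selected features:", np.where(W >= tau)[0])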

examples/plot_variable_importance_classif.py (6 additions, 6 deletions)
@@ -20,7 +20,7 @@

 #############################################################################
 # Imports needed
-# ------------------------------
+# --------------

 import matplotlib.lines as mlines
 import matplotlib.pyplot as plt
@@ -37,7 +37,7 @@

 #############################################################################
 # Generate the data
-# ------------------------------
+# -----------------
 # We generate the data using a multivariate normal distribution with a Toeplitz
 # correlation matrix. The target variable is generated using a non-linear function
 # of the features. To make the problem more intuitive, we generate a non-linear
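
The comment above is cut off by the diff context; a short sketch of the kind of simulation it describes follows (the exact non-linearity, sizes, and the binarization step are assumptions for illustration, not the file's actual code).

import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(42)
n_samples, n_features, rho = 300, 10, 0.6  # illustrative sizes

# Toeplitz correlation: Corr(X_i, X_j) = rho ** |i - j|
cov = toeplitz(rho ** np.arange(n_features))
X = rng.multivariate_normal(np.zeros(n_features), cov, size=n_samples)

# Non-linear target built from a few features, then binarized for classification
score = X[:, 0] * X[:, 1] + np.cos(X[:, 2])
y = (score > np.median(score)).astype(int)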
@@ -81,7 +81,7 @@

 #############################################################################
 # Visualize the data
-# ------------------------------
+# ------------------

 fig, axes = plt.subplots(
     1,
@@ -115,7 +115,7 @@

 #############################################################################
 # Variable importance inference
-# ------------------------------
+# -----------------------------
 # We use two different Support Vector Machine models, one with a linear kernel and
 # one with a polynomial kernel of degree 2, well specified to capture the non-linear
 # relationship between the features and the target variable. We then use the CPI and
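
CPI here refers to hidimstat's conditional permutation importance; its API is not shown in this hunk, so as a rough stand-in the sketch below fits the two SVMs the comment describes and scores features with scikit-learn's plain (unconditional) permutation importance, which is a different, simpler method.

import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy stand-in data with a non-linear decision rule
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))
y = (X[:, 0] * X[:, 1] + np.cos(X[:, 2]) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The two model specifications mentioned in the comment
models = {
    "linear": make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "poly-2": make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2)),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=30, random_state=0)
    print(name, result.importances_mean.round(3))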
@@ -208,7 +208,7 @@

 #############################################################################
 # Compute the p-values for the variable importance
-# ------------------------------
+# ------------------------------------------------

 pval_arr = np.zeros((n_features, 3))
 for j in range(n_features):
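
The hunk truncates the loop body. One plausible way to fill a (n_features, 3) p-value array, assuming each of the three methods yields repeated importance draws per feature and using a one-sided normal approximation; both are assumptions, since the actual computation lies outside the diff context.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_features, n_methods, n_repeats = 10, 3, 30
# Hypothetical stand-in: importances[m, j] holds n_repeats draws for method m, feature j
importances = 0.2 + rng.standard_normal((n_methods, n_features, n_repeats))

pval_arr = np.zeros((n_features, n_methods))
for j in range(n_features):
    for m in range(n_methods):
        draws = importances[m, j]
        z = draws.mean() / (draws.std(ddof=1) / np.sqrt(n_repeats))
        pval_arr[j, m] = norm.sf(z)  # one-sided test of importance > 0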
@@ -218,7 +218,7 @@

 #############################################################################
 # Visualize the variable importance
-# ------------------------------
+# ---------------------------------
 # Here we plot the variable importance and highlight the features that are considered
 # important, with a p-value lower than 0.05, using a diamond marker. We also highlight
 # the true important features, used to generate the target variable, with a star marker.
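
A small matplotlib sketch of the marker scheme that comment describes; the importance scores, p-values, and true support below are synthetic placeholders, not the example's actual results.

import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(2)
n_features = 10
importance = rng.random(n_features)
pval = rng.random(n_features)
true_support = {0, 1, 2}  # placeholder for the truly important features

fig, ax = plt.subplots()
ax.bar(range(n_features), importance, color="lightgray")
for j in range(n_features):
    if pval[j] < 0.05:  # significant at 0.05: diamond marker
        ax.plot(j, importance[j] + 0.02, marker="D", color="tab:blue")
    if j in true_support:  # ground-truth importance: star marker
        ax.plot(j, importance[j] + 0.06, marker="*", color="tab:orange")
ax.set_xlabel("feature index")
ax.set_ylabel("importance")
plt.show()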
