
This implements the tuning procedure for SDR and classification problems in the forthcoming paper Quach and Li (2021).

Usage

kfold_km_tuning(
  h_list,
  k,
  x_datta,
  y_datta,
  d,
  ytype,
  class_labels,
  n_cpc,
  method = "newton",
  std = "none",
  parallelize = F,
  control_list = list(),
  iter.max = 100,
  nstart = 100
)
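
As a rough sketch only: the data objects x and y below are hypothetical, and the interpretations of the undocumented arguments (h_list as a grid of candidate bandwidths, k as the number of cross-validation folds) are assumptions rather than part of this documentation.

# Hypothetical data: x is an n x p predictor matrix, y a class-labelled response
tuned <- kfold_km_tuning(
  h_list = c(0.5, 1, 1.5, 2),    # assumed: candidate bandwidths to tune over
  k = 5,                         # assumed: number of cross-validation folds
  x_datta = x,
  y_datta = y,
  d = 2,                         # reduced dimension
  ytype = "multinomial",
  class_labels = sort(unique(y)),
  n_cpc = 2,                     # assumed value; see the package source for its meaning
  method = "newton",
  parallelize = FALSE
)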

Arguments

h_list
k
x_datta
y_datta
d

specifies the reduced dimension

ytype

specifies whether the response is 'continuous', 'multinomial', or 'ordinal'

class_labels
n_cpc
method

"newton" or "cg" methods; for carrying out the optimization using the standard newton-raphson (i.e. Fisher Scoring) or using Congugate Gradients

parallelize

Default is FALSE; to run in parallel, you will need to have foreach and some parallel backend loaded (see the setup sketch after this list); parallelization is strongly recommended.

control_list

a list of control parameters for the Newton-Raphson or Conjugate Gradient methods

iter.max
nstart
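
The parallelize option relies on foreach with a registered parallel backend. A minimal setup sketch, assuming the doParallel package (any foreach backend should work):

library(doParallel)
cl <- makeCluster(4)        # spin up 4 worker processes
registerDoParallel(cl)      # register them as the foreach backend

# ... call kfold_km_tuning(..., parallelize = TRUE) here ...

stopCluster(cl)             # release the workers when done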

Value

A list containing the estimated basis, candidate matrices, local gradients, and kernel weights.

  • opcg - A 'p x d' matrix that estimates a basis for the central subspace.

  • opcg_wls - A 'p x d' matrix that estimates a basis for the central subspace based on the initial value of the optimization problem; useful for examining bad starting values.

  • cand_mat - A list that contains both the candidate matrix for OPCG and for the initial value; this is used in other functions for order determination.

  • gradients - The estimated local gradients; used in regularization of OPCG.

  • weights - The kernel weights in the local-linear GLM.
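
Assuming the return value is stored in an object named fit (a hypothetical name), the components are accessed as ordinary list elements:

B_hat  <- fit$opcg       # p x d estimated basis of the central subspace
B_init <- fit$opcg_wls   # estimate at the initial value of the optimization
cands  <- fit$cand_mat   # candidate matrices used for order determination
grads  <- fit$gradients  # estimated local gradients
wts    <- fit$weights    # kernel weights from the local-linear GLM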

Details

The kernel for the local-linear regression is fixed as a Gaussian kernel.

For large 'p', we strongly recommend using the Conjugate Gradient implementation by setting method="cg". For method="cg", the hybrid conjugate gradient of Dai and Yuan is implemented, but only the Armijo rule is enforced, via backtracking, as in Bertsekas' "Convex Optimization Algorithms". A weak Wolfe condition can also be enforced by setting c_wolfe > 0 in the control_list.

However, since c_wolfe is usually set to 0.1 (Wikipedia) and enforcing it drastically slows down the algorithm relative to Newton for small to moderate 'p', the default does not enforce a Wolfe condition; we assume that our link function gives an initial point close enough that local convergence is satisfactory. Should the initial values be suspect, enforcing the Wolfe condition may be a reasonable trade-off.
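
For instance, a large-'p' fit that uses conjugate gradients and enforces the weak Wolfe condition could be set up as in the sketch below; the data objects are the same hypothetical ones used in the earlier sketch, and the c_wolfe value of 0.1 simply follows the convention noted above.

# Conjugate-gradient optimization with a weak Wolfe condition enforced
tuned_cg <- kfold_km_tuning(
  h_list = c(0.5, 1, 1.5, 2), k = 5,
  x_datta = x, y_datta = y,
  d = 2, ytype = "multinomial",
  class_labels = sort(unique(y)), n_cpc = 2,
  method = "cg",
  control_list = list(c_wolfe = 0.1),  # c_wolfe > 0 turns on the weak Wolfe condition
  parallelize = TRUE                   # requires a registered foreach backend
)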