API¶
Variables and derivatives¶
- class src.autodiff_107.diff.Node(value, fm_seed=0)¶
This is the main object for variables used in automatic differentiation. Functions the user wants to differentiate must be built from operations on Node objects. Constants involved in the function can be used directly and do not need to be wrapped in Node objects.
- Parameters
value (int, float) – Value of the variable
fm_seed (int, float) – Seed to use in forward mode. Most likely this will be 1 for one input and 0 for all others, but specific use cases may warrant different weights. Default: 0
- Variables
_value – Value at which the variable is evaluated
_d – Value of the derivative of this Node with respect to each parent at the point where this Node is evaluated.
_fmd – Forward mode derivative. After the forward pass, this stores the derivative of the node with respect to the input(s) selected by fm_seed. With nonzero seeds on multiple inputs, it instead holds the corresponding weighted sum of derivatives; if that is your use case, you likely know what you are doing.
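A minimal construction sketch (assuming Node overloads the standard arithmetic operators, which the rest of this page implies):

    from src.autodiff_107 import diff

    x = diff.Node(2.0, fm_seed=1)  # input we differentiate with respect to
    y = diff.Node(3.0)             # fm_seed defaults to 0
    f = x * y + 5.0                # the constant 5.0 needs no wrapping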
- class src.autodiff_107.diff.Var(*args, **kwargs)¶
- property value¶
This holds the value of this variable.
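A hypothetical sketch: the signature above only shows *args and **kwargs, so passing the value as the first positional argument is an assumption here:

    from src.autodiff_107 import diff

    v = diff.Var(2.0)  # assumption: value passed positionally
    print(v.value)     # -> 2.0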
- src.autodiff_107.diff.derivative(f, X)¶
This function will return the derivative of f with respect to X. Both f and X must be one-dimensional, but their lengths need not match. If f is a vector, the Jacobian will be returned.
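A sketch of the expected call pattern, assuming Node operator overloading as above and that plain one-dimensional sequences are accepted for f and X:

    from src.autodiff_107 import diff

    x = diff.Node(2.0)
    y = diff.Node(3.0)
    f = x * y + x                     # df/dx = y + 1, df/dy = x
    J = diff.derivative([f], [x, y])  # length-1 f, length-2 X: expect the Jacobian [[4., 2.]]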
- src.autodiff_107.diff.get_fm_derivative(Y)¶
If a forward mode seed was set using set_fm_seed (or the fm_seed argument to Node), this function will return the derivative calculated by forward mode with that seed.
- Parameters
Y (array(Node)) – input
- Returns
forward mode derivative
- Return type
array
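A forward mode sketch, assuming the seed can be passed through the Node constructor as documented above and that the forward pass happens as the expression is built:

    from src.autodiff_107 import diff

    x = diff.Node(2.0, fm_seed=1)  # differentiate with respect to x
    y = diff.Node(3.0)             # seed 0: not the differentiation variable
    f = x * y
    print(diff.get_fm_derivative([f]))  # expect d(x*y)/dx = y = 3.0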
- src.autodiff_107.diff.get_fm_seed(X)¶
This function will return the current seed of X to be used in forward mode (see the usage sketch after set_fm_seed below).
- src.autodiff_107.diff.set_fm_seed(X, seed)¶
This function sets the forward mode seed for the variable X to be used in the forward pass in any following calculations. X and seed should have matching shapes.
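A sketch of seeding after construction; the array form follows from the shape-matching requirement above:

    import numpy as np
    from src.autodiff_107 import diff

    X = np.array([diff.Node(1.0), diff.Node(2.0)])
    diff.set_fm_seed(X, np.array([1, 0]))  # differentiate with respect to X[0]
    print(diff.get_fm_seed(X))             # expect the seed array [1, 0]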
- src.autodiff_107.diff.value(x)¶
This function takes a numpy array returned by variable(), or the result of a calculation on such an array, and returns a numpy array of the numeric values of x.
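variable() is referenced here but not documented in this section; the sketch below assumes it is a factory returning a numpy array of Node objects that supports elementwise arithmetic:

    import numpy as np
    from src.autodiff_107 import diff

    X = diff.variable(np.array([1.0, 2.0]))  # assumed Node-array factory
    Y = X * 2 + 1                            # elementwise ops keep Y a Node array
    print(diff.value(Y))                     # -> array([3., 5.])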
Optimizers¶
- src.autodiff_107.optim.adagrad(f, x0, lr=0.1, lr_decay=0.0, weight_decay=0.0, eps=1e-10, max_iter=1000)¶
Adagrad algorithm: Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.
- Parameters
f (function (callable)) – objective function to minimize
x0 (list or np.array w/ dtype float or int) – initial parameters
lr (float, optional) – learning rate, defaults to 0.1
lr_decay (float, optional) – learning rate decay, defaults to 0.
weight_decay (float, optional) – weight decay (L2 penalty), defaults to 0.
eps (float, optional) – term added to the denominator to improve numerical stability, defaults to 1e-10
max_iter (int, optional) – maximum number of iterations, defaults to 1000
- Returns
function minimum after max_iter steps
- Return type
np.array
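A usage sketch on a quadratic bowl with minimum at [1., -2.]; how closely the result approaches it depends on lr and max_iter, and the objective is assumed to be built from operations the Node type supports:

    import numpy as np
    from src.autodiff_107 import optim

    f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
    x_min = optim.adagrad(f, np.array([0.0, 0.0]), lr=0.5, max_iter=2000)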
- src.autodiff_107.optim.gradient_descent(f, x0, lr=0.1, max_iter=1000)¶
Simple gradient descent.
- Parameters
f (function (callable)) – function to minimize
x0 (list or np.array w/ dtype float or int) – initial starting point for gradient descent; if x0 is too far from the minimum, the algorithm might not converge
lr (float) – learning rate, defaults to 0.1
max_iter (int) – number of iterations to run gradient descent, defaults to 1000
- Returns
function minimum after max_iter steps
- Return type
np.array
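A sketch under the same assumptions as the adagrad example above:

    import numpy as np
    from src.autodiff_107 import optim

    g = lambda x: (x[0] - 3) ** 2
    x_min = optim.gradient_descent(g, np.array([0.0]), lr=0.1, max_iter=500)  # expect ~[3.]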
Root finding¶
- src.autodiff_107.rootfind.newton(f, x0, tol=1e-05, maxiter=50)¶
Newton’s root-finding algorithm for single-variable functions
- Parameters
f (function (callable)) – function of a single variable x
x0 (float or int) – initial starting point estimate
tol (float, optional) – root tolerance, defaults to 1e-5
maxiter (int, optional) – maximum number of iterations, defaults to 50
- Returns
root of f
- Return type
float
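A sketch; the function is assumed to be differentiable through the Node operations used inside it:

    from src.autodiff_107 import rootfind

    root = rootfind.newton(lambda x: x ** 2 - 2, 1.0)  # expect ~1.41421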
Visualization¶
- src.autodiff_107.viz.draw_graph(nodes, draw_=True)¶
Draw the directed computational graph with colors: independent variables are in red, dependent variables are in blue, and the final node is in orange. A combined usage sketch for the three drawing functions follows the last of them.
- Parameters
nodes (array(Node)) – array of nodes from which to draw the graph
draw_ (bool, optional) – whether to draw the graph, defaults to True
- src.autodiff_107.viz.draw_graph_expensive(nodes, draw_=True)¶
Directed computational graph that specifies the layout of the nodes explicitly for clarity. All the independent variables, in red, share the same x-coordinate. All the output values, in orange, share the same x-coordinate. All dependent variables, in blue, have x-coordinates between those of the independent variables and those of the outputs.
- Parameters
nodes (array(Node)) – array of nodes from which to draw the graph
draw_ (bool, optional) – whether to draw the graph, defaults to True
- src.autodiff_107.viz.draw_graph_without_edge_labels(nodes, draw_=True)¶
Directed graph where nodes are variables (independent and intermediate, plus the output) and edges represent parent-child relations. This is a lightweight version, hence its simplicity.
- Parameters
nodes (array(Node)) – array of nodes from which to draw the graph
draw_ (bool, optional) – whether to draw the graph, defaults to True
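A combined sketch for the three drawing functions, assuming the graph is recorded as the expression is built and that passing the output node(s) is sufficient:

    from src.autodiff_107 import diff, viz

    x = diff.Node(2.0)
    y = diff.Node(3.0)
    f = x * y + x
    viz.draw_graph([f])                      # colored graph with edge labels
    viz.draw_graph_expensive([f])            # layered layout: inputs, intermediates, outputs
    viz.draw_graph_without_edge_labels([f])  # lightweight variant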