
# Core API

## Variable

```python
from MiniTorch.core.variable import Variable
```

The fundamental tensor class with automatic differentiation support.

### Constructor

```python
Variable(data, requires_grad=True)
```

| Parameter | Type | Description |
| --- | --- | --- |
| `data` | `np.ndarray` | The tensor data |
| `requires_grad` | `bool` | Enable gradient tracking (default: `True`) |

### Attributes

| Attribute | Type | Description |
| --- | --- | --- |
| `data` | `np.ndarray` | Raw NumPy array |
| `grad` | `np.ndarray \| None` | Accumulated gradient |
| `shape` | `tuple` | Shape of the underlying array |
| `creator` | `Function \| None` | Op that created this variable |
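
To make the attribute table concrete, here is a minimal stand-in written in plain NumPy. The class name `ToyVariable` is hypothetical and this is an illustrative sketch of the fields above, not MiniTorch's actual implementation:

```python
import numpy as np

# Illustrative stand-in for Variable -- not MiniTorch's actual code.
class ToyVariable:
    def __init__(self, data, requires_grad=True):
        self.data = np.asarray(data)      # raw NumPy array
        self.grad = None                  # filled in by a backward pass
        self.requires_grad = requires_grad
        self.creator = None               # op that produced this variable, if any

    @property
    def shape(self):
        # Shape is derived from the underlying array
        return self.data.shape

v = ToyVariable(np.zeros((2, 3)))
print(v.shape, v.grad)   # (2, 3) None
```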

### Methods

#### `.backward(grad=None)`

Runs reverse-mode automatic differentiation starting from this variable, accumulating results into `.grad`. If `grad` is omitted, it defaults to an array of ones.

```python
loss.backward()               # grad defaults to ones
loss.backward(np.ones((1,)))  # or pass an explicit upstream gradient
```

#### `.zero_grad()`

Resets `.grad` to `None`.

### Operator Overloads

`Variable` supports the standard Python operators; each dispatches to the corresponding `Function`:

| Operator | Function |
| --- | --- |
| `a + b` | `Add` |
| `a - b` | `Sub` |
| `a * b` | `Mul` |
| `a / b` | `Div` |
| `a ** n` | `Pow` |
| `a @ b` | `MatMul` |
| `-a` | `Neg` |
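
One common way such dispatch works is for the dunder method to construct the op, run its forward pass, and record itself as the result's `creator`. The sketch below illustrates that pattern with hypothetical `ToyAdd`/`ToyVar` classes; MiniTorch's real classes may differ in detail:

```python
import numpy as np

# Minimal dispatch sketch -- illustrative, not MiniTorch's actual code.
class ToyAdd:
    def forward(self, a, b):
        return a + b

class ToyVar:
    def __init__(self, data, creator=None):
        self.data = np.asarray(data)
        self.creator = creator   # remembers which op produced this value

    def __add__(self, other):
        op = ToyAdd()
        # The operator builds the op, runs forward, and links the result to it
        return ToyVar(op.forward(self.data, other.data), creator=op)

x = ToyVar([1.0, 2.0])
y = ToyVar([3.0, 4.0])
z = x + y
print(z.data, type(z.creator).__name__)   # [4. 6.] ToyAdd
```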

## Function

```python
from MiniTorch.core.function import Function
```

Base class for all differentiable operations.

### Implementing a Custom Op

```python
import numpy as np

from MiniTorch.core.function import Function

class MyOp(Function):
    def forward(self, x: np.ndarray) -> np.ndarray:
        # Stash the input; it is needed to compute the gradient
        self.save_for_backward(x)
        return np.tanh(x)

    def backward(self, grad_output: np.ndarray) -> np.ndarray:
        (x,) = self.saved_tensors
        # d/dx tanh(x) = 1 - tanh(x)^2, scaled by the upstream gradient
        return (1 - np.tanh(x) ** 2) * grad_output
```
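
The backward formula above can be sanity-checked numerically with central differences; this check uses plain NumPy and is independent of MiniTorch (the helper `tanh_grad` simply restates the formula from `MyOp.backward`):

```python
import numpy as np

def tanh_grad(x, grad_output):
    # Analytic gradient, same formula as MyOp.backward above
    return (1 - np.tanh(x) ** 2) * grad_output

x = np.array([-1.0, 0.0, 0.5])
eps = 1e-6
# Central-difference approximation of d/dx tanh(x)
numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)
analytic = tanh_grad(x, np.ones_like(x))
print(np.allclose(numeric, analytic, atol=1e-8))   # True
```

A gradient check like this is a good habit for any new `Function` subclass before wiring it into a model.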

#### `save_for_backward(*tensors)`

Stashes NumPy arrays needed during the backward pass.

#### `saved_tensors`

Retrieves the stashed arrays inside `.backward()`.


## no_grad

```python
from MiniTorch.core.config import no_grad

with no_grad():
    y = model(x)   # no graph is built
```

Context manager that disables gradient tracking globally.
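
One common way to implement such a switch, shown here as a sketch rather than MiniTorch's actual code, is a context manager that flips a module-level flag and restores it on exit; ops would consult the flag and skip graph construction when it is off:

```python
from contextlib import contextmanager

grad_enabled = True   # stand-in for a global "build the graph?" flag

@contextmanager
def toy_no_grad():
    global grad_enabled
    prev = grad_enabled
    grad_enabled = False     # ops check this and skip recording creators
    try:
        yield
    finally:
        grad_enabled = prev  # restored even if the body raises

with toy_no_grad():
    inside = grad_enabled
print(inside, grad_enabled)   # False True
```

Restoring the previous value in a `finally` block (rather than unconditionally setting it back to `True`) also makes nested `no_grad` contexts behave correctly.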

Released under the MIT License.