FlexNN 1
Fully connected neural network built from scratch with flexible n-layer design and multiple activations.
FlexNN::Layer Class Reference

Represents a single layer in a neural network.

#include <Layer.h>

Public Member Functions

 Layer (int inputSize, int outputSize, const std::string &activationFunction="relu")
 Constructor for the Layer class.
 
Eigen::MatrixXd getWeights () const
 Getter for weights.
 
Eigen::VectorXd getBiases () const
 Getter for biases.
 
void updateWeights (const Eigen::MatrixXd &dW, const Eigen::VectorXd &db, double learningRate)
 Update weights and biases.
 
std::pair< Eigen::MatrixXd, Eigen::MatrixXd > forward (const Eigen::MatrixXd &input)
 Forward pass through the layer.
 
Eigen::MatrixXd backward (const Eigen::MatrixXd &nextW, const Eigen::MatrixXd &nextdZ, const Eigen::MatrixXd &currZ)
 Backward pass through the layer.
 

Detailed Description

Represents a single layer in a neural network.

The Layer class encapsulates the properties and methods required for a neural network layer, including weights, biases, forward and backward passes, and weight updates.

This class is designed to be flexible and can be used with different activation functions. It supports the "relu" and "softmax" activation functions by default, but can be extended to include others.

Constructor & Destructor Documentation

◆ Layer()

FlexNN::Layer::Layer (int inputSize,
                      int outputSize,
                      const std::string &activationFunction = "relu")
inline

Constructor for the Layer class.

Initializes the layer with random weights and biases.

Parameters
    inputSize           The size of the input to this layer.
    outputSize          The size of the output from this layer (also the number of neurons in this layer).
    activationFunction  The activation function to be used in this layer (default is "relu").
Note
If this is the last layer, the activation function should be "softmax".

Member Function Documentation

◆ backward()

Eigen::MatrixXd FlexNN::Layer::backward (const Eigen::MatrixXd &nextW,
                                         const Eigen::MatrixXd &nextdZ,
                                         const Eigen::MatrixXd &currZ)

Backward pass through the layer.

This method computes the gradient of the loss with respect to the inputs of this layer given the gradients from the next layer.

Parameters
    nextW   The weights of the next layer.
    nextdZ  The gradients from the next layer.
    currZ   The linear output (Z) of this layer.
Returns
The gradient of the loss with respect to the inputs of this layer (dZ).

◆ forward()

std::pair< Eigen::MatrixXd, Eigen::MatrixXd > FlexNN::Layer::forward ( const Eigen::MatrixXd &  input)

Forward pass through the layer.

This method computes the output of the layer given an input matrix. It applies the activation function to the linear combination of inputs and weights.

Parameters
    input   The input data for the forward pass.
Returns
A pair containing the linear output (Z) and the activated output (A).

◆ getBiases()

Eigen::VectorXd FlexNN::Layer::getBiases ( ) const
inline

Getter for biases.

This method returns the biases of the layer.

Returns
Eigen::VectorXd The biases of the layer.

◆ getWeights()

Eigen::MatrixXd FlexNN::Layer::getWeights ( ) const
inline

Getter for weights.

This method returns the weights of the layer.

Returns
Eigen::MatrixXd The weights of the layer.

◆ updateWeights()

void FlexNN::Layer::updateWeights (const Eigen::MatrixXd &dW,
                                   const Eigen::VectorXd &db,
                                   double learningRate)
inline

Update weights and biases.

This method updates the weights and biases of the layer using the provided gradients and a specified learning rate.

Parameters
    dW            The gradient of the weights.
    db            The gradient of the biases.
    learningRate  The learning rate for updating the weights and biases.

The documentation for this class was generated from the following files: