Image by Author
Rust Burn is a new deep learning framework written entirely in the Rust programming language. The motivation behind creating a new framework rather than using existing ones like PyTorch or TensorFlow is to build a versatile framework that caters well to various users including researchers, machine learning engineers, and low-level software engineers.
The key design principles behind Rust Burn are flexibility, performance, and ease of use.
Flexibility comes from the ability to swiftly implement cutting-edge research ideas and run experiments.
Performance is achieved through optimizations like leveraging hardware-specific features such as Tensor Cores on Nvidia GPUs.
Ease of use stems from simplifying the workflow of training, deploying, and running models in production.
Key Features:
- Flexible and dynamic computational graph
- Thread-safe data structures
- Intuitive abstractions for simplified development process
- Blazingly fast performance during training and inference
- Supports multiple backend implementations for both CPU and GPU
- Full support for logging, metrics, and checkpointing during training
- Small but active developer community
Installing Rust
Burn is a powerful deep learning framework based on the Rust programming language. It requires a basic understanding of Rust, but once you’ve got that down, you’ll be able to take advantage of all the features that Burn has to offer.
To install Rust, follow the official installation guide. You can also check out the GeeksforGeeks guide for installing Rust on Windows and Linux, complete with screenshots.
Image from Install Rust
Installing Burn
To use Rust Burn, you first need to have Rust installed on your system. Once Rust is correctly set up, you can create a new Rust application using cargo, Rust’s package manager.
Run the following command in your current directory:
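A minimal sketch of the command, assuming an illustrative project name of `my_burn_app` (the name is a placeholder; any valid crate name works):

```shell
# Create a new Rust binary project with cargo
cargo new my_burn_app
```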
Navigate into this new directory:
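Assuming the same illustrative project name as above:

```shell
# Move into the newly created project directory
cd my_burn_app
```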
Next, add Burn as a dependency, along with the WGPU backend feature which enables GPU operations:
cargo add burn --features wgpu
Finally, compile the project to install Burn:
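The standard cargo command for compiling the project (which downloads and builds the dependencies added above) is:

```shell
# Compile the project and all of its dependencies, including Burn
cargo build
```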
This will install the Burn framework along with the WGPU backend. WGPU allows Burn to execute low-level GPU operations.
Element-Wise Addition
To run the following code, open src/main.rs and replace its contents with:
use burn::tensor::Tensor;
use burn::backend::WgpuBackend;

// Type alias for the backend to use.
type Backend = WgpuBackend;

fn main() {
    // Create two tensors: the first with explicit values, the second
    // filled with ones and with the same shape as the first.
    let tensor_1 = Tensor::<Backend, 2>::from_data([[2., 3.], [4., 5.]]);
    let tensor_2 = Tensor::<Backend, 2>::ones_like(&tensor_1);

    // Print the element-wise addition (done with the WGPU backend) of the two tensors.
    println!("{}", tensor_1 + tensor_2);
}
In the main function, we create two tensors on the WGPU backend and add them element-wise.
To execute the code, run cargo run in the terminal.
Output:
You should now be able to view the outcome of the addition.
Tensor {
  data: [[3.0, 4.0], [5.0, 6.0]],
  shape: [2, 2],
  device: BestAvailable,
  backend: "wgpu",
  kind: "Float",
  dtype: "f32",
}
Note: the following code is an example from Burn Book: Getting started.
Position-Wise Feed-Forward Module
Here is an example of how easy it is to use the framework. We declare a position-wise feed-forward module and its forward pass using this code snippet.
use burn::module::Module;
use burn::nn;
use burn::tensor::{backend::Backend, Tensor};

#[derive(Module, Debug)]
pub struct PositionWiseFeedForward<B: Backend> {
    linear_inner: nn::Linear<B>,
    linear_outer: nn::Linear<B>,
    dropout: nn::Dropout,
    gelu: nn::GELU,
}

impl<B: Backend> PositionWiseFeedForward<B> {
    pub fn forward<const D: usize>(&self, input: Tensor<B, D>) -> Tensor<B, D> {
        let x = self.linear_inner.forward(input);
        let x = self.gelu.forward(x);
        let x = self.dropout.forward(x);

        self.linear_outer.forward(x)
    }
}
The above code is from the GitHub repository.
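To make the computation above concrete, here is a framework-free sketch of what the module's forward pass does, written in plain Rust with scalar "layers" instead of Burn's Linear modules. The weights and the tanh-based GELU approximation are illustrative assumptions, and dropout is treated as the identity (as it is at inference time):

```rust
/// Tanh approximation of the GELU activation (illustrative, not Burn's implementation).
fn gelu(x: f64) -> f64 {
    0.5 * x * (1.0 + ((2.0 / std::f64::consts::PI).sqrt() * (x + 0.044715 * x.powi(3))).tanh())
}

/// A stand-in for a linear layer: weight * x + bias applied element-wise.
/// (A real Linear layer would be a matrix multiply plus a bias vector.)
fn linear(input: &[f64], weight: f64, bias: f64) -> Vec<f64> {
    input.iter().map(|&x| weight * x + bias).collect()
}

/// Mirrors the module's forward pass: linear_inner -> gelu -> dropout (identity) -> linear_outer.
fn feed_forward(input: &[f64]) -> Vec<f64> {
    let x = linear(input, 2.0, 0.0); // linear_inner with hypothetical weights
    let x: Vec<f64> = x.iter().map(|&v| gelu(v)).collect();
    linear(&x, 1.0, 0.5) // linear_outer with hypothetical weights
}

fn main() {
    let out = feed_forward(&[1.0, -1.0]);
    println!("{:?}", out);
}
```

The shape of the computation is the same as in the Burn module: an inner projection, a non-linearity, dropout, and an outer projection, applied position by position.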
Example Projects
To learn about more examples and run them, clone the https://github.com/burn-rs/burn repository and run the example projects it includes:
Pre-trained Models
To build your AI application, you can use the following pre-trained models and fine-tune them with your dataset.
Rust Burn represents an exciting new option in the deep learning framework landscape. If you are already a Rust developer, you can leverage Rust’s speed, safety, and concurrency to push the boundaries of what’s possible in deep learning research and production. Burn sets out to find the right compromises in flexibility, performance, and usability to create a uniquely versatile framework suitable for diverse use cases.
While still in its early stages, Burn shows promise in tackling pain points of existing frameworks and serving the needs of various practitioners in the field. As the framework matures and the community around it grows, it has the potential to become a production-ready framework on par with established options. Its fresh design and language choice offer new possibilities for the deep learning community.
Resources
Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master’s degree in Technology Management and a bachelor’s degree in Telecommunication Engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.