Iris Classon - In Love with Code

Understanding Tensor<T> and TensorPrimitives in .NET

Recent versions of .NET introduced new APIs for working with numerical data: Tensor<T> and TensorPrimitives in System.Numerics.Tensors. If you, like me, mostly work with typical application code, these can feel unfamiliar. I’ve read up on the topic to educate myself, and hopefully these notes, written up as a blog post, can give you some answers as I explain what Tensor<T> and TensorPrimitives are and when they matter.

If you stay within typical application development, arrays and collections take you a long way. You rarely need to think about memory layout or how data is structured beyond a list or a dictionary.

The moment you start dealing with numeric data at scale, or data that has an inherent shape, things change. An image is a good example: it is not just a sequence of values, it has dimensions. The same applies to things like matrices, grids, or time series with multiple signals.

You can model this with nested arrays or by flattening everything into a single array and calculating indices yourself. Both approaches work, but they push complexity into your code. They also make it harder to reason about correctness and performance.
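To make the do-it-yourself approach concrete, here is a minimal sketch of the flattened-array version: a 2-by-3 grid stored in one array, with the row-major index arithmetic done by hand. The names and values are mine, purely for illustration.

```csharp
// A 2x3 "matrix" stored flat, row-major: index = row * cols + col.
int rows = 2, cols = 3;
float[] data = new float[rows * cols];

// Write the value 5 at row 1, column 2.
data[1 * cols + 2] = 5.0f;

// Read it back. The index arithmetic is entirely our responsibility,
// and nothing stops us from getting it wrong silently.
Console.WriteLine(data[1 * cols + 2]); // prints 5
```

Nothing in the type system records that this array is two-dimensional, which is exactly the complexity the text above is talking about.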

That is the gap these APIs are trying to close.

Tensor<T>

Tensor<T> is a way to represent multi-dimensional data without having to build your own abstractions.

using System.Numerics.Tensors;

var tensor = Tensor.Create<float>([2, 3]);
tensor[0, 0] = 1.0f;
tensor[1, 2] = 5.0f;

The important part is the shape. You define the dimensions up front, and indexing follows that structure. You are no longer relying on conventions or manual calculations to understand how the data is laid out.

Under the hood, the data is still stored in a contiguous block of memory. That detail matters because it makes iteration predictable and efficient, and it allows the type to work well with spans and other low-level features in .NET.

If you have ever used jagged arrays for this kind of data, Tensor<T> is essentially a more explicit and controlled version of the same idea.

TensorPrimitives

TensorPrimitives addresses a different problem. It is about how you perform operations on numeric data.

A typical implementation for adding two arrays looks like this:

for (int i = 0; i < a.Length; i++)
{
    result[i] = a[i] + b[i];
}

With TensorPrimitives, the same operation becomes:

TensorPrimitives.Add(a, b, result);
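For a complete picture, here is the call in context, a sketch assuming the System.Numerics.Tensors NuGet package is referenced; the array values are made up for illustration:

```csharp
using System.Numerics.Tensors;

float[] a = { 1f, 2f, 3f, 4f };
float[] b = { 10f, 20f, 30f, 40f };
float[] result = new float[a.Length];

// Element-wise addition: result[i] = a[i] + b[i] for every i.
TensorPrimitives.Add(a, b, result);
// result is now { 11, 22, 33, 44 }
```

The spans-based signature means plain arrays work directly, no conversion needed.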

The difference is not just syntactic. These methods are implemented to take advantage of vectorization where possible. That means they can use CPU instructions that operate on multiple values at once.
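To make “multiple values at once” concrete, here is a rough illustration of the idea using Vector<T> from System.Numerics, which ships in the base class library. This is not how TensorPrimitives is implemented line for line, just the general shape of a vectorized loop; the array sizes are arbitrary.

```csharp
using System.Numerics;

float[] a = new float[1000];
float[] b = new float[1000];
float[] result = new float[1000];
for (int j = 0; j < a.Length; j++) { a[j] = j; b[j] = 2 * j; }

int width = Vector<float>.Count; // e.g. 8 floats per register with AVX2
int i = 0;

// Process a full vector's worth of elements per iteration.
for (; i <= a.Length - width; i += width)
{
    var va = new Vector<float>(a, i);
    var vb = new Vector<float>(b, i);
    (va + vb).CopyTo(result, i);
}

// Scalar tail for whatever does not fill a full vector.
for (; i < a.Length; i++)
{
    result[i] = a[i] + b[i];
}
```

Writing this by hand, and keeping the tail loop correct, is exactly the kind of detail TensorPrimitives takes off your plate.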

You get a more direct expression of intent, and you get better performance characteristics without having to think about the details.

If your work is mostly CRUD, APIs, or UI, you can safely ignore this for now. The existing collection types remain the right tools.

It becomes relevant when your data has structure beyond a single dimension, or when you are doing repeated numeric operations over larger datasets. In those cases, having a clear representation of shape and access to optimized operations can simplify your code and improve performance at the same time.

It is easy to associate the term “tensor” with machine learning frameworks. That is not what this is.

There is no built-in support for GPU execution or training models. These APIs are lower level. They give you a better way to represent and process numeric data, but they do not try to be a full ML stack.

For most .NET developers, this is not something you need to adopt immediately. It is more useful to be aware that these tools exist and understand the kind of problems they are designed to solve.

Comments

Leave a comment below, or by email.

Last modified on 2025-12-08
