Who Said Neural Networks Aren't Linear?

Figure 1 from the paper. The Linearizer architecture $f(x)=g_{y}^{-1}(Ag_{x}(x))$ and the induced vector space operations.
Abstract
Neural networks are famously nonlinear. However, linearity is only defined relative to a pair of vector spaces: the domain and codomain of a map $f:\mathcal{X}\rightarrow\mathcal{Y}$. Is it possible to identify a pair of non-standard vector spaces for which a conventionally nonlinear function is, in fact, linear? This paper introduces a method that makes such vector spaces explicit by construction. We find that if we sandwich a linear operator $A$ between two invertible neural networks $g_x$ and $g_y$, so that $f(x)=g_{y}^{-1}(A g_{x}(x))$, then $f$ is linear with respect to vector spaces whose addition and scaling operations are induced by $g_x$ and $g_y$. This framework makes the entire arsenal of linear algebra applicable to nonlinear mappings. We demonstrate this by collapsing diffusion model sampling into a single step, enforcing global idempotency for projective generative models, and enabling modular style transfer.
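To make the construction concrete, here is a minimal, self-contained sketch of a Linearizer. The invertible maps (asinh composed with fixed orthogonal matrices) are hypothetical stand-ins for the invertible networks $g_x$ and $g_y$, and all names are illustrative rather than taken from the paper's code. The sketch checks that $f$ is additive with respect to the induced operation $x_1 \oplus x_2 := g_x^{-1}(g_x(x_1) + g_x(x_2))$ (and analogously on the output side); scaling works the same way via $\lambda \odot x := g_x^{-1}(\lambda\, g_x(x))$.

```python
import torch

# Minimal sketch of the Linearizer f(x) = g_y^{-1}(A g_x(x)).
# The invertible maps below (asinh after a fixed orthogonal matrix) are
# hypothetical stand-ins for the paper's invertible networks g_x and g_y.
torch.manual_seed(0)
d = 4
Qx = torch.linalg.qr(torch.randn(d, d, dtype=torch.float64)).Q  # orthogonal
Qy = torch.linalg.qr(torch.randn(d, d, dtype=torch.float64)).Q
A = torch.randn(d, d, dtype=torch.float64)                      # the linear operator

def g_x(x):     return torch.asinh(x @ Qx)       # bijection on R^d
def g_x_inv(z): return torch.sinh(z) @ Qx.T
def g_y(y):     return torch.asinh(y @ Qy)
def g_y_inv(z): return torch.sinh(z) @ Qy.T

def f(x):
    return g_y_inv(g_x(x) @ A.T)                 # f(x) = g_y^{-1}(A g_x(x))

# Induced additions: x1 (+) x2 := g^{-1}(g(x1) + g(x2)); scaling is analogous.
def oplus_x(x1, x2): return g_x_inv(g_x(x1) + g_x(x2))
def oplus_y(y1, y2): return g_y_inv(g_y(y1) + g_y(y2))

x1 = torch.randn(d, dtype=torch.float64)
x2 = torch.randn(d, dtype=torch.float64)
lhs = f(oplus_x(x1, x2))          # f applied to the induced sum of inputs
rhs = oplus_y(f(x1), f(x2))       # induced sum of the outputs
print(torch.allclose(lhs, rhs))   # should print True: f is additive in the induced spaces
```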
Key Results
One-Step Flow Matching & Inversion

Because linear maps compose into a single linear map, multi-step flow-matching sampling collapses into one step. The linearity of the model also allows exact inversion via the pseudoinverse of $A$, enabling latent-space interpolation between real images with a single forward pass.
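Below is a hedged, self-contained sketch of the inversion and interpolation mechanism, using the same kind of stand-in invertible maps as above (none of these functions come from the released code): the pseudoinverse $A^{+}$ gives $f^{+}(y)=g_x^{-1}(A^{+}g_y(y))$, and interpolation is a straight line in the induced output space.

```python
import torch

# Hedged sketch of inversion via the pseudoinverse of the inner matrix A,
# and of interpolation along a straight line in the induced output space.
# asinh after fixed orthogonal matrices stands in for g_x and g_y.
torch.manual_seed(0)
d = 4
Qx = torch.linalg.qr(torch.randn(d, d, dtype=torch.float64)).Q
Qy = torch.linalg.qr(torch.randn(d, d, dtype=torch.float64)).Q
A = torch.randn(d, d, dtype=torch.float64)        # full rank almost surely
A_pinv = torch.linalg.pinv(A)                     # Moore-Penrose pseudoinverse

def g_x(x):     return torch.asinh(x @ Qx)
def g_x_inv(z): return torch.sinh(z) @ Qx.T
def g_y(y):     return torch.asinh(y @ Qy)
def g_y_inv(z): return torch.sinh(z) @ Qy.T

def f(x):     return g_y_inv(g_x(x) @ A.T)        # f(x)   = g_y^{-1}(A   g_x(x))
def f_inv(y): return g_x_inv(g_y(y) @ A_pinv.T)   # f^+(y) = g_x^{-1}(A^+ g_y(y))

def interpolate(y1, y2, t):
    # Straight-line interpolation in the induced output vector space.
    return g_y_inv((1 - t) * g_y(y1) + t * g_y(y2))

y1 = f(torch.randn(d, dtype=torch.float64))
y2 = f(torch.randn(d, dtype=torch.float64))
print(torch.allclose(f(f_inv(y1)), y1, atol=1e-6))  # round trip is exact when A has full rank
mid = interpolate(y1, y2, 0.5)                      # one forward/inverse pass per endpoint
```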
Globally Projective Generative Model

By enforcing idempotency ($A^2=A$) on the inner matrix, the entire network becomes a global projector by construction. Top: inputs. Bottom: their projections.
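One simple way to realize this constraint is to share a single invertible map $g$ on both sides and parameterize $A = UU^{\top}$ with orthonormal $U$, so that $f\circ f = g^{-1}(A^2 g(\cdot)) = g^{-1}(A\, g(\cdot)) = f$ holds for every input by construction. The sketch below is illustrative only; the shared-$g$ choice and the $UU^{\top}$ parameterization are assumptions for the example, not necessarily the paper's exact parameterization.

```python
import torch

# Hedged sketch of a globally idempotent Linearizer: the same invertible map g
# is used on both sides, and the inner matrix A = U U^T (U orthonormal) is
# idempotent, so f(f(x)) = f(x) for every x by construction.
# The map g below is a hypothetical stand-in, as in the sketches above.
torch.manual_seed(0)
d, k = 4, 2                                        # k = rank of the projection
Q = torch.linalg.qr(torch.randn(d, d, dtype=torch.float64)).Q
U = torch.linalg.qr(torch.randn(d, k, dtype=torch.float64)).Q  # orthonormal columns
A = U @ U.T                                        # A @ A == A by construction

def g(x):     return torch.asinh(x @ Q)
def g_inv(z): return torch.sinh(z) @ Q.T

def f(x):
    return g_inv(g(x) @ A)                         # A is symmetric, so A.T == A

x = torch.randn(d, dtype=torch.float64)
print(torch.allclose(f(f(x)), f(x)))               # global idempotency, up to precision
```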
Citation
@misc{berman2025linearizer,
  title        = {Who Said Neural Networks Aren't Linear?},
  author       = {Nimrod Berman and Assaf Hallak and Assaf Shocher},
  year         = {2025},
  note         = {Preprint, under review},
  howpublished = {GitHub: assafshocher/Linearizer},
  url          = {https://github.com/assafshocher/Linearizer}
}
Acknowledgements
We thank Amil Dravid and Yoad Tewel for insightful discussions. A.S. is supported by the Chaya Career Advancement Chair.