Neural field
In machine learning, a neural field (also known as an implicit neural representation, neural implicit, or coordinate-based neural network) is a mathematical field that is fully or partially parametrized by a neural network. Initially developed to tackle visual computing tasks such as rendering and reconstruction (e.g., neural radiance fields), neural fields have emerged as a promising strategy for a wider range of problems, including surrogate modelling of partial differential equations, as in physics-informed neural networks.[1]
Unlike traditional machine learning architectures such as feed-forward neural networks, convolutional neural networks, or transformers, neural fields do not operate on discrete data (e.g., sequences, images, tokens), but map continuous inputs (e.g., spatial coordinates, time) to continuous outputs (e.g., scalars, vectors). This makes neural fields not only discretization independent, but also easily differentiable. Moreover, dealing with continuous data allows for a significant reduction in space complexity, which translates to a much more lightweight network.[1]
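Such a coordinate-based network can be sketched in a few lines. The architecture below (layer sizes, tanh activations, random weights) is an illustrative assumption, not a prescribed design; the point is only that the field is queried at continuous coordinates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer coordinate network: maps a continuous 2D
# coordinate to a scalar field value.
W1, b1 = rng.normal(size=(64, 2)), np.zeros(64)
W2, b2 = rng.normal(size=(1, 64)) * 0.1, np.zeros(1)

def neural_field(x):
    """Evaluate the field at any continuous coordinate x, shape (2,)."""
    h = np.tanh(W1 @ x + b1)   # smooth activation keeps the field differentiable
    return float(W2 @ h + b2)

# Discretization independence: the field can be queried at arbitrary
# points, not just on a predefined grid.
value = neural_field(np.array([0.25, -0.7]))
```

Because the weights are the only stored quantities, the memory footprint is independent of any sampling resolution.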
Formulation and training
According to the universal approximation theorem, given adequate learning, a sufficient number of hidden units, and a deterministic relationship between input and output, a neural network can approximate any function to any degree of accuracy.[2]
Hence, in mathematical terms, given a field f : X → Y, with X ⊆ ℝ^m and Y ⊆ ℝ^n, a neural field f_θ : X → Y, with parameters θ ∈ ℝ^p, is such that[1]

    f_θ(x) ≈ f(x)  for all x ∈ X.
Training
For supervised tasks, given N examples (x_i, y_i), i = 1, …, N, in the training dataset, the neural field parameters can be learned by minimizing a loss function L (e.g., the mean squared error). The optimal parameters are found as[1][3][4]

    θ* = argmin_θ Σ_{i=1}^{N} L(f_θ(x_i), y_i).

Notably, it is not necessary to know the analytical expression of f, since the training procedure only requires input-output pairs. Indeed, a neural field can offer a continuous and differentiable surrogate of the true field, even from purely experimental data.[1]
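This supervised procedure can be sketched with a toy 1D field known only through samples. The target sin(3x), the network size, and the plain-gradient-descent optimizer are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: a 1D field known only through samples (x_i, y_i).
xs = rng.uniform(-1, 1, size=(256, 1))
ys = np.sin(3 * xs)

# Small neural field: one hidden tanh layer, scalar output.
W1, b1 = rng.normal(size=(32, 1)), np.zeros((32, 1))
W2, b2 = rng.normal(size=(1, 32)) * 0.1, np.zeros((1, 1))

def forward(x):                       # x: (N, 1)
    h = np.tanh(W1 @ x.T + b1)        # hidden activations, (32, N)
    return (W2 @ h + b2).T, h         # predictions (N, 1)

mse0 = float(np.mean((forward(xs)[0] - ys) ** 2))

lr = 0.05
for _ in range(2000):                 # plain gradient descent on the MSE
    pred, h = forward(xs)
    err = pred - ys                   # (N, 1)
    # Manual backpropagation of the mean-squared-error loss
    # (constant factors are absorbed into the learning rate).
    gW2 = (err.T @ h.T) / len(xs)
    gb2 = err.mean(axis=0, keepdims=True)
    dh = (W2.T @ err.T) * (1 - h ** 2)
    gW1 = (dh @ xs) / len(xs)
    gb1 = dh.mean(axis=1, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((forward(xs)[0] - ys) ** 2))   # should fall below mse0
```

Note that only input-output pairs are used; no analytical expression of the underlying field is required.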
Moreover, neural fields can be used in unsupervised settings, with training objectives that depend on the specific task. For example, physics-informed neural networks may be trained on the residual of the governing equations alone.[4]
Spectral bias
As with any artificial neural network, neural fields may exhibit a spectral bias (i.e., the tendency to preferentially learn the low-frequency content of a field), possibly leading to a poor representation of the ground truth.[5] Several strategies have been developed to overcome this limitation. For example, SIREN uses sinusoidal activations, while the Fourier-features approach embeds the input through sines and cosines.[6][7]
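A random Fourier-feature embedding, in the spirit of the approach cited above, can be sketched as follows; the projection size and bandwidth scale are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Fourier-feature embedding: project the input with a random
# matrix B, then take sines and cosines. Larger entries in B let the
# downstream network represent higher-frequency content.
B = rng.normal(scale=10.0, size=(16, 2))   # bandwidth scale is a tunable assumption

def fourier_features(x):
    """Map a 2D coordinate to a 32-dim embedding fed to the neural field."""
    proj = 2 * np.pi * (B @ x)
    return np.concatenate([np.sin(proj), np.cos(proj)])

emb = fourier_features(np.array([0.3, 0.8]))   # shape (32,)
```

The neural field then consumes the embedding instead of the raw coordinate, mitigating the spectral bias.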
Conditional neural fields
In many real-world cases, learning a single field is not enough. For example, when reconstructing 3D vehicle shapes from lidar data, it is desirable to have a machine learning model that can handle arbitrary shapes (e.g., a car, a bicycle, a truck). The solution is to include additional parameters, the latent variables (or latent code) z, to vary the field and adapt it to diverse tasks.[1]
Latent code production

When dealing with conditional neural fields, the first design choice is how the latent code is produced. Two main strategies can be identified[1]:
- Encoder: the latent code is the output of a second neural network, acting as an encoder. During training, a single loss function is used to learn the parameters of both the neural field and the encoder.[8]
- Auto-decoding: each training example has its own latent code, jointly trained with the neural field parameters. When the model has to process new examples (i.e., not originally present in the training dataset), a small optimization problem is solved, keeping the network parameters fixed and only learning the new latent variables.[9]
Since the latter strategy requires additional optimization steps at inference time, it sacrifices speed but keeps the overall model smaller. Moreover, despite being simpler to implement, an encoder may harm the generalization capabilities of the model.[1] For example, when dealing with a physical scalar field (e.g., the pressure of a 2D fluid), an auto-decoder-based conditional neural field can map a single point x to the corresponding value of the field, given a learned latent code z.[10] However, if the latent variables were produced by an encoder, the model would require access to the entire set of points and corresponding values (e.g., as a regular grid or a mesh graph), leading to a less robust model.[1]
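The auto-decoding step at inference time can be sketched as follows. The frozen "network" is an untrained linear stand-in, and all data values are hypothetical; the point is only that the network parameters stay fixed while the latent code is optimized:

```python
import numpy as np

rng = np.random.default_rng(0)

# Auto-decoding sketch: the conditional field f(x, z) is kept frozen;
# for a new example, only its latent code z is optimized against the
# observed samples. A linear stand-in replaces the trained network.
W = rng.normal(size=(1, 4))            # frozen field parameters

def field(x, z):                       # conditional field value at x, given z
    return float(W @ np.concatenate([x, z]))

x_obs = [np.array([0.1, 0.2]), np.array([0.5, -0.3])]
y_obs = [0.7, -0.2]                    # hypothetical observed field values

def mse(z):
    return float(np.mean([(field(x, z) - y) ** 2
                          for x, y in zip(x_obs, y_obs)]))

z = np.zeros(2)                        # latent code for the new example
mse_before = mse(z)
for _ in range(200):                   # gradient descent on z only
    grad = np.zeros(2)
    for x, y in zip(x_obs, y_obs):
        grad += 2 * (field(x, z) - y) * W[0, 2:]   # d(error^2)/dz
    z -= 0.1 * grad / len(x_obs)
mse_after = mse(z)                     # should not exceed mse_before
```

This is the small optimization problem mentioned above: fast weights-free adaptation at the cost of extra inference-time computation.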
Global and local conditioning
In a neural field with global conditioning, the latent code z does not depend on the input and hence offers a global representation (e.g., the overall shape of a vehicle). However, depending on the task, it may be more useful to divide the domain of f into several subdomains and learn a different latent code for each of them (e.g., splitting a large and complex scene into sub-scenes for more efficient rendering). This is called local conditioning.[1]
Conditioning strategies
There are several strategies to include the conditioning information in the neural field. In the general mathematical framework, conditioning the neural field on the latent variables z is equivalent to mapping them to a subset of the neural field parameters[1]

    θ = Ψ(z),

where Ψ maps latent codes to field parameters. In practice, notable strategies are:
- Concatenation: the neural field receives as input the concatenation of the original input x with the latent code z. For feed-forward neural networks, this is equivalent to making the bias of the first layer an affine transformation of the latent code (i.e., the first layer computes W_x x + W_z z + b).[1]
- Hypernetworks: a hypernetwork is a neural network that outputs the parameters of another neural network.[11] Specifically, the mapping from latent codes to neural field parameters is itself approximated by a neural network with its own trainable parameters. This approach is the most general, as it allows learning the optimal mapping from latent codes to neural field parameters. However, hypernetworks entail larger computational and memory costs, due to the large number of trainable parameters; hence, leaner approaches have been developed. For example, in feature-wise linear modulation (FiLM), the hypernetwork only produces scale and bias coefficients for the neural field layers.[1][12]
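The two lightweight strategies above can be contrasted in a few lines. All weight matrices here are random stand-ins for trained parameters; the concatenation identity, however, holds exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
x, z = rng.normal(size=3), rng.normal(size=2)   # input and latent code

# Concatenation: one weight matrix acting on [x; z] ...
W = rng.normal(size=(4, 5))
b = rng.normal(size=4)
h_concat = W @ np.concatenate([x, z]) + b

# ... equals W_x x plus a z-dependent bias (W_z z + b): an affine
# transformation of the latent code sets the first layer's bias.
W_x, W_z = W[:, :3], W[:, 3:]
h_split = W_x @ x + (W_z @ z + b)

# FiLM: a small hypernetwork (here, two random linear maps) outputs only
# per-layer scale and shift coefficients from the latent code.
G, H = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))
gamma, beta = G @ z, H @ z
h_film = gamma * (W_x @ x) + beta     # modulate the layer's activations
```

FiLM trades the full generality of a hypernetwork for a far smaller number of produced parameters per layer.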
Meta-learning
Instead of relying on a latent code to adapt the neural field to a specific task, it is also possible to exploit gradient-based meta-learning. In this case, the neural field is seen as the specialization of an underlying meta-neural-field, whose parameters are modified to fit the specific task through a few steps of gradient descent.[13][14] An extension of this meta-learning framework is the CAVIA algorithm, which splits the trainable parameters into context-specific and shared groups, improving parallelization and interpretability while reducing meta-overfitting. This strategy is similar to the auto-decoding conditional neural field, but the training procedure is substantially different.[15]
Applications
Since neural networks can efficiently model diverse mathematical fields, neural fields have been applied to a wide range of problems:

- 3D scene reconstruction: neural fields can model the properties of 3D scenes (i.e., geometry, appearance, materials, and lighting), in both static and dynamic cases.[1] For example, a neural field can learn signed distance functions (SDFs)[9] or occupancy functions,[16] which provide an efficient and continuous representation of the geometry. Another example is neural radiance fields (NeRFs), which learn to render 3D scenes by mapping coordinates and viewing angles to the corresponding radiance and density.[17]
- Digital humans: neural fields can be used to model human shape and appearance and can include information on the complex movements of a human body.[1]
- Generative modelling: by leveraging conditioning, neural fields can also work as deep generative models.[1]
- Image processing: compared with convolutional neural networks, neural fields offer a continuous representation of the image and, hence, are not limited to the original pixel discretization.[1]
- Robotics: the strengths of neural fields in scene reconstruction are also useful in robotics, as navigation requires reconstructing the surroundings from sensor data. Moreover, neural fields can be used for planning and control.[1]
- Lossy data compression[1]
- Signal processing[1]

- Scientific computing: scientific machine learning (SciML) has recently emerged as the combination of physics-based and data-driven models for the numerical solution of differential equations.[4] In this context, the ability of neural fields to model inputs and solutions in a continuous and differentiable manner is invaluable. For example, physics-informed neural networks (PINNs) use neural fields to include, in the training objective, the residual of the governing equations computed via automatic differentiation.[18] Alternatively, encode-process-decode architectures built on conditional neural fields (e.g., CORAL) have been explored as an operator-learning technique.[19][10]
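The PINN-style residual objective can be sketched for the toy equation u'(t) = -u(t). Real PINNs obtain the derivative by automatic differentiation; a central finite difference is used below only to keep the sketch dependency-free, and the network is an untrained stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Untrained stand-in neural field u(t); in practice its parameters would
# be optimized to drive the residual loss below to zero.
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)
W2 = rng.normal(size=(1, 16)) * 0.1

def u(t):
    return float(W2 @ np.tanh(W1 @ np.array([t]) + b1))

def residual_loss(ts, eps=1e-4):
    """Mean squared residual of u'(t) + u(t) = 0 at collocation points ts.

    u' is approximated by a central finite difference; a real PINN would
    use automatic differentiation instead.
    """
    res = [(u(t + eps) - u(t - eps)) / (2 * eps) + u(t) for t in ts]
    return float(np.mean(np.square(res)))

loss = residual_loss(np.linspace(0.0, 1.0, 20))
```

Minimizing such a residual over the parameters, possibly together with boundary- and initial-condition terms, is the core of the physics-informed training objective.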
See also
- Artificial intelligence
- Machine learning
- Neural network (machine learning)
- Neural radiance field
- Neural operators
References
- ^ a b c d e f g h i j k l m n o p q r s t Xie, Yiheng; Takikawa, Towaki; Saito, Shunsuke; Litany, Or; Yan, Shiqin; Khan, Numair; Tombari, Federico; Tompkin, James; Sitzmann, Vincent; Sridhar, Srinath (2022). "Neural Fields in Visual Computing and Beyond". Computer Graphics Forum. 41 (2): 641–676. doi:10.1111/cgf.14505. ISSN 1467-8659.
- ^ Hornik, Kurt; Stinchcombe, Maxwell; White, Halbert (1989-01-01). "Multilayer feedforward networks are universal approximators". Neural Networks. 2 (5): 359–366. doi:10.1016/0893-6080(89)90020-8. ISSN 0893-6080.
- ^ Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). Deep learning. Adaptive computation and machine learning. Cambridge, Mass: The MIT press. ISBN 978-0-262-03561-3.
- ^ a b c Quarteroni, Alfio; Gervasio, Paola; Regazzoni, Francesco (2025-04-02), Combining physics-based and data-driven models: advancing the frontiers of research with Scientific Machine Learning, arXiv, doi:10.48550/arXiv.2501.18708, arXiv:2501.18708, retrieved 2025-07-10
- ^ Rahaman, Nasim; Baratin, Aristide; Arpit, Devansh; Draxler, Felix; Lin, Min; Hamprecht, Fred A.; Bengio, Yoshua; Courville, Aaron (2019-05-31), On the Spectral Bias of Neural Networks, arXiv, doi:10.48550/arXiv.1806.08734, arXiv:1806.08734, retrieved 2025-07-10
- ^ Sitzmann, Vincent; Martel, Julien N. P.; Bergman, Alexander W.; Lindell, David B.; Wetzstein, Gordon (2020-06-17), Implicit Neural Representations with Periodic Activation Functions, arXiv, doi:10.48550/arXiv.2006.09661, arXiv:2006.09661, retrieved 2025-07-09
- ^ Tancik, Matthew; Srinivasan, Pratul P.; Mildenhall, Ben; Fridovich-Keil, Sara; Raghavan, Nithin; Singhal, Utkarsh; Ramamoorthi, Ravi; Barron, Jonathan T.; Ng, Ren (2020-06-18), Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains, arXiv, doi:10.48550/arXiv.2006.10739, arXiv:2006.10739, retrieved 2025-07-09
- ^ Qi, Charles R.; Su, Hao; Mo, Kaichun; Guibas, Leonidas J. (2017-04-10), PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation, arXiv, doi:10.48550/arXiv.1612.00593, arXiv:1612.00593, retrieved 2025-07-09
- ^ a b Park, Jeong Joon; Florence, Peter; Straub, Julian; Newcombe, Richard; Lovegrove, Steven (2019-01-16), DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation, arXiv, doi:10.48550/arXiv.1901.05103, arXiv:1901.05103, retrieved 2025-07-09
- ^ a b Serrano, Louis; Boudec, Lise Le; Koupaï, Armand Kassaï; Wang, Thomas X.; Yin, Yuan; Vittaut, Jean-Noël; Gallinari, Patrick (2023-11-30), Operator Learning with Neural Fields: Tackling PDEs on General Geometries, arXiv, doi:10.48550/arXiv.2306.07266, arXiv:2306.07266, retrieved 2025-07-10
- ^ Ha, David; Dai, Andrew; Le, Quoc V. (2016-12-01), HyperNetworks, arXiv, doi:10.48550/arXiv.1609.09106, arXiv:1609.09106, retrieved 2025-07-09
- ^ Perez, Ethan; Strub, Florian; Vries, Harm de; Dumoulin, Vincent; Courville, Aaron (2017-12-18), FiLM: Visual Reasoning with a General Conditioning Layer, arXiv, doi:10.48550/arXiv.1709.07871, arXiv:1709.07871, retrieved 2025-07-09
- ^ Finn, Chelsea; Abbeel, Pieter; Levine, Sergey (2017-07-18), Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, arXiv, doi:10.48550/arXiv.1703.03400, arXiv:1703.03400, retrieved 2025-07-09
- ^ Sitzmann, Vincent; Chan, Eric R.; Tucker, Richard; Snavely, Noah; Wetzstein, Gordon (2020-06-17), MetaSDF: Meta-learning Signed Distance Functions, arXiv, doi:10.48550/arXiv.2006.09662, arXiv:2006.09662, retrieved 2025-07-09
- ^ Zintgraf, Luisa M.; Shiarlis, Kyriacos; Kurin, Vitaly; Hofmann, Katja; Whiteson, Shimon (2019-06-10), Fast Context Adaptation via Meta-Learning, arXiv, doi:10.48550/arXiv.1810.03642, arXiv:1810.03642, retrieved 2025-07-10
- ^ Mescheder, Lars; Oechsle, Michael; Niemeyer, Michael; Nowozin, Sebastian; Geiger, Andreas (2019-04-30), Occupancy Networks: Learning 3D Reconstruction in Function Space, arXiv, doi:10.48550/arXiv.1812.03828, arXiv:1812.03828, retrieved 2025-07-09
- ^ Mildenhall, Ben; Srinivasan, Pratul P.; Tancik, Matthew; Barron, Jonathan T.; Ramamoorthi, Ravi; Ng, Ren (2020-08-03), NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, arXiv, doi:10.48550/arXiv.2003.08934, arXiv:2003.08934, retrieved 2025-07-09
- ^ Raissi, M.; Perdikaris, P.; Karniadakis, G. E. (2019-02-01). "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations". Journal of Computational Physics. 378: 686–707. doi:10.1016/j.jcp.2018.10.045. ISSN 0021-9991.
- ^ Yin, Yuan; Kirchmeyer, Matthieu; Franceschi, Jean-Yves; Rakotomamonjy, Alain; Gallinari, Patrick (2023-02-15), Continuous PDE Dynamics Forecasting with Implicit Neural Representations, arXiv, doi:10.48550/arXiv.2209.14855, arXiv:2209.14855, retrieved 2025-07-10
External links
- Brown University's database of neural-field architectures
- Neural Radiance Fields: visual computing applications
- GitHub-Awesome list of Implicit Neural Representations