1. Learnable Gaussian Feature Embedding \(\texttt{FE}_\phi\)
Let \(\phi = \{(\mu_i, \Sigma_i, f_i) : i = 1, \dots, N\}\) be the set of Gaussian parameters, where \(\mu_i \in \mathbb{R}^d\) is the position of the \(i\)-th Gaussian and \(\Sigma_i \in \mathbb{S}^{d}_{++}\) is its covariance matrix. Each Gaussian carries a learnable feature embedding \(f_i \in \mathbb{R}^k\) of feature dimension \(k\); for simplicity, we consider \(k = 1\). Given an input coordinate \(x \in \mathbb{R}^d\), the learnable embedding \(\texttt{FE}_\phi : \mathbb{R}^d \rightarrow \mathbb{R}\) extracts Gaussian features as follows:
\[
\texttt{FE}_\phi(x) = \sum_{i=1}^{N} G_i(x)\, f_i, \qquad G_i(x) = \exp\!\left(-\frac{1}{2}(x - \mu_i)^\top \Sigma_i^{-1} (x - \mu_i)\right),
\]
where \(N\) is the number of Gaussians and \(G_i\) is the \(i\)-th Gaussian function. \(\texttt{FE}_\phi\) maps an input coordinate to a feature embedding via a weighted sum of the individual features \(f_i\) of each Gaussian. Extensions to \(k > 1\) for enhanced expressiveness are provided in Appendix A.1. All Gaussian parameters \(\phi\) are learnable and updated iteratively throughout training. This dynamic adjustment, akin to adaptive mesh-based numerical methods, optimizes the structure of the underlying Gaussian functions to accurately approximate the solution functions.
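To make the construction concrete, below is a minimal JAX sketch of \(\texttt{FE}_\phi\) for the \(k = 1\) case; all names are illustrative. It takes each \(\Sigma_i\) directly as an SPD matrix, whereas in practice one would parameterize it (e.g., via a Cholesky factor) so it remains SPD throughout training.

```python
import jax.numpy as jnp

def feature_embedding(x, mu, Sigma, f):
    """FE_phi(x) = sum_i G_i(x) * f_i for one coordinate x (k = 1 case).

    x:     (d,)       input coordinate
    mu:    (N, d)     Gaussian positions mu_i
    Sigma: (N, d, d)  SPD covariance matrices Sigma_i
    f:     (N,)       learnable per-Gaussian features f_i
    """
    diff = x - mu                                           # (N, d)
    sol = jnp.linalg.solve(Sigma, diff[..., None])[..., 0]  # Sigma_i^{-1} (x - mu_i)
    maha = jnp.einsum('nd,nd->n', diff, sol)                # Mahalanobis terms
    G = jnp.exp(-0.5 * maha)                                # Gaussian weights G_i(x)
    return jnp.dot(G, f)                                    # weighted feature sum
```

Solving \(\Sigma_i z = x - \mu_i\) rather than forming \(\Sigma_i^{-1}\) explicitly is a standard choice for numerical stability.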
2. Learnable Feature Refinement with \(\texttt{NN}_\theta\)
Once the features are extracted, a neural network processes them to produce the solution output:
\[
u(x) = \texttt{NN}_\theta\!\left(\texttt{FE}_\phi(x)\right),
\]
where \(\texttt{NN}_\theta\) is a lightweight MLP with parameters \(\theta\). We employ a single-hidden-layer MLP with a small number of hidden units, incurring negligible additional computational cost.
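Under the same assumptions, the following sketch pairs the `feature_embedding` function above with a single-hidden-layer MLP to form the full forward pass; the tanh activation is our assumption, as the text specifies only the network's depth.

```python
import jax
import jax.numpy as jnp

def nn_refine(theta, h):
    """NN_theta: single-hidden-layer MLP mapping the feature to the solution.

    theta = (W1, b1, W2, b2) with shapes (H, 1), (H,), (1, H), (1,).
    """
    W1, b1, W2, b2 = theta
    return (W2 @ jnp.tanh(W1 @ h + b1) + b2).squeeze()

def u(phi, theta, x):
    """Approximate solution u(x) = NN_theta(FE_phi(x))."""
    mu, Sigma, f = phi
    h = jnp.atleast_1d(feature_embedding(x, mu, Sigma, f))  # scalar feature, k = 1
    return nn_refine(theta, h)

# Batched evaluation over M coordinates xs of shape (M, d):
u_batch = jax.vmap(u, in_axes=(None, None, 0))
```

Because both \(\phi = (\mu, \Sigma, f)\) and \(\theta\) enter `u` as ordinary arguments, gradients with respect to all of them can be taken jointly (e.g., with `jax.grad`) in a standard training loop, realizing the iterative updates described above.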