1. Learnable Gaussian Feature Embedding \(\texttt{FE}_\phi\)
Let \(\phi = \{(\mu_i, \Sigma_i, f_i): i=1, \dots, N\}\) be the set of Gaussian model parameters, where \(\mu_i \in \mathbb{R}^{d}\) is the position of the \(i\)-th Gaussian, \(\Sigma_i \in \mathbb{S}^{d}_{++}\) is its covariance matrix, and \(f_i \in \mathbb{R}^{k}\) is its learnable feature embedding. Given an input coordinate \(x \in \mathbb{R}^d\), the learnable Gaussian feature embedding \(\texttt{FE}_\phi:\mathbb{R}^d \rightarrow \mathbb{R}^{k}\) is extracted as
\[
\texttt{FE}_\phi(x) = \sum_{i=1}^{N} G_i(x)\, f_i, \qquad G_i(x) = \exp\!\left(-\tfrac{1}{2}\,(x-\mu_i)^\top \Sigma_i^{-1} (x-\mu_i)\right),
\]
where \(k\) is the input dimension of the MLP, \(N\) is the number of Gaussians, and \(G_i\) denotes the \(i\)-th Gaussian function. \(\texttt{FE}_\phi\) maps an input coordinate to a feature embedding via a weighted sum of the individual features \(f_i\), with weights given by the Gaussian responses \(G_i(x)\). To enhance expressive capability, a different set of Gaussians can be used for each feature dimension; further details are provided in Appendix A.1. All Gaussian parameters \(\phi\) are learnable and iteratively updated throughout training. This dynamic adjustment, akin to adaptive mesh-based numerical methods, optimizes the structure of the underlying Gaussian functions to accurately approximate the solution.
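For concreteness, the following is a minimal JAX sketch of \(\texttt{FE}_\phi\), not a reference implementation: it assumes diagonal covariance matrices \(\Sigma_i\) parameterized by log standard deviations and a single shared set of Gaussians (omitting the per-feature-dimension variant of Appendix A.1).

```python
# A minimal JAX sketch of the Gaussian feature embedding FE_phi.
# Assumptions (not from the text): diagonal covariances Sigma_i stored as
# log standard deviations, and unnormalized Gaussian functions G_i.
import jax
import jax.numpy as jnp

def init_phi(key, N, d, k):
    """Initialize phi = {(mu_i, Sigma_i, f_i)} for N Gaussians in R^d."""
    k_mu, k_f = jax.random.split(key)
    return {
        "mu": jax.random.uniform(k_mu, (N, d)),        # positions mu_i
        "log_sigma": jnp.zeros((N, d)),                # diagonal Sigma_i (log std)
        "f": 0.1 * jax.random.normal(k_f, (N, k)),     # feature embeddings f_i
    }

def feature_embedding(phi, x):
    """FE_phi(x) = sum_i G_i(x) f_i for a single coordinate x in R^d."""
    inv_var = jnp.exp(-2.0 * phi["log_sigma"])                 # diagonals of Sigma_i^{-1}
    quad = jnp.sum((x - phi["mu"]) ** 2 * inv_var, axis=-1)    # Mahalanobis distances
    G = jnp.exp(-0.5 * quad)                                   # Gaussian weights G_i(x)
    return G @ phi["f"]                                        # weighted sum, shape (k,)
```

Because every step is a differentiable array operation, gradients with respect to \(\mu_i\), \(\Sigma_i\), and \(f_i\) are available through automatic differentiation, which is what allows \(\phi\) to be updated during training.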
2. Solution Approximation with Gaussians Followed by a Lightweight Neural Network \(\texttt{NN}_\theta\)
Once the features are extracted, a neural network maps them to the solution output:
\[
\hat{u}(x) = \texttt{NN}_\theta\!\left(\texttt{FE}_\phi(x)\right),
\]
where \(\texttt{NN}_\theta\) is a lightweight MLP with parameters \(\theta\). We employ an MLP with a single hidden layer and a small number of hidden units, incurring negligible additional computational cost.
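Continuing the sketch above, the full model simply composes the embedding with a single-hidden-layer MLP; the hidden width (16), \(\tanh\) activation, and scalar output are illustrative assumptions rather than choices specified here.

```python
def init_theta(key, k, hidden=16, out=1):
    """Initialize a single-hidden-layer MLP NN_theta (width is illustrative)."""
    k1, k2 = jax.random.split(key)
    return {
        "W1": jax.random.normal(k1, (k, hidden)) / jnp.sqrt(k),
        "b1": jnp.zeros(hidden),
        "W2": jax.random.normal(k2, (hidden, out)) / jnp.sqrt(hidden),
        "b2": jnp.zeros(out),
    }

def model(theta, phi, x):
    """u(x) = NN_theta(FE_phi(x)): Gaussian features followed by a tiny MLP."""
    h = jnp.tanh(feature_embedding(phi, x) @ theta["W1"] + theta["b1"])
    return h @ theta["W2"] + theta["b2"]

# Batched evaluation; gradients of a training loss flow into both theta and
# phi, so the MLP and the Gaussians are optimized jointly.
u_batch = jax.vmap(model, in_axes=(None, None, 0))
```

For example, `u_batch(theta, phi, xs)` evaluates the model on a batch of coordinates `xs` of shape `(B, d)`, returning outputs of shape `(B, 1)`.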