Kevin You's Research Projects

[Back to main page]

This is a high-level overview of some research I have done. Also check out my CV!
  1. Multiplication in fractional Sobolev spaces
  2. Zeros and a-values of approximations for a class of L-functions
  3. Intersection of doubling measures
  4. Next event estimation for walk on spheres
  5. Energy conserving time integration
  6. Panel methods for computer graphics

Multiplication in fractional Sobolev spaces

Fractional Sobolev spaces are important for measuring regularity and arise naturally, for example, via the trace operator (the restriction of a function to the boundary of its domain). There are multiple ways to define the fractional Sobolev space, such as via the Fourier transform and Littlewood-Paley theory, or as interpolation spaces between integer-order Sobolev spaces. We instead consider a more elementary approach using the intrinsic (Gagliardo) seminorm. For \( 0 < s < 1 \), the fractional Sobolev space \(W^{s,p} (\mathbb{R}^N)\) is the space of all \( u \in L^p(\mathbb{R}^N) \) such that \[ \vert u \vert_{W^{s,p}} := \left( \int_{\mathbb{R}^N} \int_{\mathbb{R}^N} \frac{ \vert u(x) - u(y) \vert^p }{\vert x - y \vert^{N+sp}} \, dx \, dy \right)^{1/p} < \infty, \] with norm \( \Vert u \Vert_{W^{s,p}} = \Vert u \Vert_{L^p} + \vert u \vert_{W^{s,p}} \). Working with this definition, in the case \(N = 1\) we derive necessary and sufficient conditions for the continuous embedding \[ W^{s_1,p_1} \times W^{s_2, p_2} \hookrightarrow W^{s,p} \] under pointwise multiplication of fractional Sobolev spaces.
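For concreteness, the seminorm can be estimated numerically. The sketch below (an illustration only, not code from the paper; the test function, the truncation to \([-5,5]\), and the grid resolution are arbitrary choices) approximates \( \vert u \vert_{W^{s,p}} \) for \( N = 1 \) by simple quadrature:

import numpy as np

# Rough quadrature estimate of the Gagliardo seminorm |u|_{W^{s,p}} in one dimension.
# Illustration only: the test function u, the parameters s and p, and the truncated
# grid are arbitrary choices.
s, p = 0.5, 2.0
x = np.linspace(-5.0, 5.0, 1201)
h = x[1] - x[0]
u = np.exp(-x**2)  # smooth, rapidly decaying test function

X, Y = np.meshgrid(x, x, indexing="ij")
U, V = np.meshgrid(u, u, indexing="ij")
diff = np.maximum(np.abs(X - Y), 1e-12)      # guard the diagonal x = y
integrand = np.abs(U - V) ** p / diff ** (1.0 + s * p)

seminorm = (np.sum(integrand) * h * h) ** (1.0 / p)
print(seminorm)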

Zeros and a-values of approximations for a class of L-functions

The well-known Riemann hypothesis states that all non-trivial zeros of the Riemann zeta function \(\zeta(\sigma + it)\) lie on the critical line \(\sigma = \frac{1}{2}\). A weaker form of the hypothesis says that, in the limit \( T \rightarrow \infty \), one hundred percent of the zeros with \( 0 < t < T \) lie on the critical line. Currently it is known that asymptotically at least two-fifths of such zeros lie on the critical line. Instead of zeros, we may also ask, for a fixed non-zero complex number \( a \), where the solutions of \( \zeta(\sigma + it) = a \), called a-values, occur. Unlike zeros, we know that these a-values lie very close to the critical line, but Selberg conjectured that zero percent of them lie exactly on the line. Currently it is known that asymptotically at most half of the a-values lie on the critical line. Instead of working with \( \zeta \) itself, we work with approximations of the zeta function \[ \zeta_N(s) = \sum_{n=1}^N n^{-s} + \chi(s) \sum_{n=1}^N n^{s-1}, \] which arise from the Hardy-Littlewood approximate functional equation. We prove that these approximations satisfy the property that zero percent of their a-values lie on the critical line. Our central tool is to bound various analytic functions from above and below and to use Jensen's formula to control the number of zeros.
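To make \( \zeta_N \) concrete, the sketch below (an illustration only, not code from the paper) evaluates it with mpmath, using the standard factor \( \chi(s) = 2^s \pi^{s-1} \sin(\pi s/2) \Gamma(1-s) \) from the functional equation \( \zeta(s) = \chi(s) \zeta(1-s) \); the choices of \( N \) and \( t \) are arbitrary:

import mpmath as mp

mp.mp.dps = 30  # working precision in digits

def chi(s):
    # chi(s) from the functional equation zeta(s) = chi(s) * zeta(1 - s)
    return mp.power(2, s) * mp.power(mp.pi, s - 1) * mp.sin(mp.pi * s / 2) * mp.gamma(1 - s)

def zeta_N(s, N):
    # two finite Dirichlet sums glued together by chi(s)
    main = mp.fsum(mp.power(n, -s) for n in range(1, N + 1))
    dual = mp.fsum(mp.power(n, s - 1) for n in range(1, N + 1))
    return main + chi(s) * dual

N = 10
t = 2 * mp.pi * N**2          # height at which N is the natural cutoff sqrt(t / (2*pi))
s = mp.mpc(0.5, t)
print(zeta_N(s, N))
print(mp.zeta(s))             # rough agreement is expected near this cutoff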

Intersection of doubling measures

A locally-integrable function \( u \) defined on \( \mathbb{R} \) is in the space \(BMO\) (bounded mean oscillation) if \[ \Vert u \Vert_{BMO} := \sup_{I} \frac{1}{\vert I \vert} \int_I \vert u(y) - u_I \vert dy < \infty \] where \( u_I \) denotes the average of \( u \) on \( I \), and the supremum is taken over all bounded intervals. In harmonic analysis it is often difficult to work with all intervals, and it is desirable to instead consider n-adic intervals of the form \( [ \frac{k-1}{n^m}, \frac{k}{n^m} ) \) for \(m, k \in \mathbb{Z} \). This leads to the natural definition of \(BMO_n\). A folkloric question asks whether it suffices to examine all n-adic intervals, that is, whether \[ \bigcap_{n \geq 2} BMO_n = BMO? \] By constructing a family of counterexamples with very sharp peaks near the endpoints of n-adic intervals (controlling the position of these peaks requires some number theory), and demonstrating that these examples fail a reverse Hölder inequality, we show that the answer is no.
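As a discrete illustration of the n-adic mean oscillation (a crude proxy only: it samples a fixed grid on \([0,1)\), looks at finitely many scales, and the test function is an arbitrary choice), one can compute:

import numpy as np

# Largest mean oscillation of a sampled function over n-adic subintervals of [0, 1),
# down to scale n**(-max_level). A crude discrete proxy for the BMO_n seminorm.
def nadic_oscillation(u, x, n, max_level=5):
    worst = 0.0
    for m in range(1, max_level + 1):
        idx = np.floor(x * n**m).astype(int)   # which n-adic interval each sample lies in
        for k in np.unique(idx):
            vals = u[idx == k]
            worst = max(worst, np.mean(np.abs(vals - vals.mean())))
    return worst

x = np.linspace(0.0, 1.0, 5000, endpoint=False)
u = np.log(np.abs(x - 1.0 / 3.0) + 1e-9)   # a classical function of bounded mean oscillation
for n in (2, 3, 5):
    print(n, nadic_oscillation(u, x, n))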

Next event estimation for walk on spheres

Among the most important of all PDEs in mathematics and physics is Laplace's equation \[ \begin{aligned} \Delta u &= 0 \text{ if } x \in \Omega \\ u &= g \text{ if } x \in \partial \Omega. \end{aligned} \] It represents situations of equilibrium such as steady-state heat flow or fluid flow. Numerically, Laplace's equation is usually solved with finite element methods, but these methods depend heavily on the quality of the volumetric mesh. Instead, Monte Carlo methods such as walk on spheres may be preferable, since they only require a surface mesh. Harmonic functions (solutions to Laplace's equation) satisfy a mean-value property: \( u(x) \) is equal to the average of \( u(y) \) over all \( y \in \partial B(x,r) \), assuming \( B(x,r) \subseteq \Omega \). Thus, if we pick \(x_1 \in \partial B(x,r) \) uniformly at random, then \( u(x_1) \) is an estimator of \(u(x)\). We can repeat this scheme and estimate \( u(x_1) \) with some \( x_2 \in \partial B(x_1,r') \), continuing ad infinitum. Equivalently, we can think of picking the \( x_n \) by running a random walk and recording where it first exits each ball. The sequence \( x_n \) then converges almost surely to some point \( z \in \partial \Omega \), and an estimator of \( u(x) \) is \( u(z) = g(z) \).
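To make the recursion concrete, here is a minimal walk on spheres sketch (an illustration only, not the code from this project): the domain is taken to be the unit disk so that the distance-to-boundary query is trivial, and the boundary data is a harmonic polynomial so that the exact answer is known.

import numpy as np

# Minimal walk on spheres for the Dirichlet problem on the unit disk.
# The disk, boundary data g, tolerance, and sample count are arbitrary choices.
rng = np.random.default_rng(0)

def g(p):
    # boundary data; x^2 - y^2 is harmonic, so the exact interior solution is g itself
    return p[0]**2 - p[1]**2

def walk_on_spheres(x, eps=1e-4, max_steps=1000):
    p = np.array(x, dtype=float)
    for _ in range(max_steps):
        d = 1.0 - np.linalg.norm(p)          # distance from p to the unit circle
        if d < eps:
            break
        theta = rng.uniform(0.0, 2.0 * np.pi)
        p = p + d * np.array([np.cos(theta), np.sin(theta)])  # uniform point on the sphere
    return g(p / np.linalg.norm(p))          # snap to the boundary and evaluate g

x = (0.3, 0.2)
estimates = [walk_on_spheres(x) for _ in range(2000)]
print(np.mean(estimates), g(np.array(x)))    # Monte Carlo mean vs exact value 0.05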

While the walk on spheres algorithm is very elegant, it has much unused potential. Firstly, a single random walk is costly due to the many geometric queries performed, yet it returns only one estimate of the solution. Secondly, the walk has no knowledge of the boundary conditions, which is problematic when the boundary data is irregular. We devised new mechanisms for utilizing next event estimation and incorporated boundary data via multiple importance sampling to address these two issues. Preliminary results suggest variance reduction. Shown below is a comparison between naive walk on spheres and our method, both at 300 samples per point, for a boundary term with small support that can be sampled from.
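Multiple importance sampling itself is standard; as a generic one-dimensional toy (not the estimator developed in this project), the balance heuristic combines a uniform strategy with one that targets a peaked term as follows:

import numpy as np

# Generic multiple importance sampling with the balance heuristic on [0, 1]:
# combine uniform sampling with a strategy matched to a sharply peaked integrand.
# A one-dimensional toy only; the integrand and densities are arbitrary choices.
rng = np.random.default_rng(1)

def f(x):
    return np.exp(-200.0 * (x - 0.7) ** 2)          # sharply peaked "boundary term"

def pdf_uniform(x):
    return np.ones_like(x)                           # uniform density on [0, 1]

def pdf_peaked(x):
    # normal density centered on the peak (its mass outside [0, 1] is negligible)
    return np.exp(-200.0 * (x - 0.7) ** 2) / np.sqrt(np.pi / 200.0)

def balance_weight(x, this_pdf, other_pdf):
    return this_pdf(x) / (this_pdf(x) + other_pdf(x))

n = 1000
xu = rng.uniform(0.0, 1.0, n)
xp = np.clip(rng.normal(0.7, np.sqrt(1.0 / 400.0), n), 0.0, 1.0)

est = (np.mean(balance_weight(xu, pdf_uniform, pdf_peaked) * f(xu) / pdf_uniform(xu))
       + np.mean(balance_weight(xp, pdf_peaked, pdf_uniform) * f(xp) / pdf_peaked(xp)))
print(est, np.sqrt(np.pi / 200.0))                   # estimate vs (near-)exact integral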

Figure 1. Laplace's equation solved with naive walk on spheres (left) and with our method (right)

Energy conserving time integration

Simulating dynamic elastic bodies by numerical time integration is difficult because the system of differential equations is stiff. This makes many common numerical methods unstable unless very small time steps are taken. Therefore, in graphics, implicit methods are preferred: they are almost always stable, but they tend to dissipate energy and damp the motion. Consider the numerical scheme \[ \begin{aligned} x_{n+1} &= x_n + h (\kappa v_{n+1} + (1-\kappa) v_n) \\ v_{n+1} &= v_n - h m^{-1} (\kappa \nabla U( x_{n+1}) + (1-\kappa) \nabla U( x_n)) \end{aligned} \] where \( \kappa \in [0,1] \) is a parameter. If \( \kappa = 0 \), the scheme is explicit Euler, which is fast but unstable. If \( \kappa = 1 \), the scheme is implicit Euler, which is both A-stable and L-stable, but requires solving an implicit system and tends to lose energy. If \( \kappa = 1/2 \), the scheme is the trapezoidal rule, which is symplectic. It conserves energy exactly when the force \( -\nabla U(x) \) is linear, but in practice, for stiff systems and large time steps, it also leads to blowups. The difficulty is that we do not know the appropriate value of \( \kappa \) beforehand.
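To see the tradeoff concretely, the sketch below (an illustration only; the stiffness, mass, step size, and initial conditions are arbitrary choices) applies the \( \kappa \)-family to a single stiff linear spring, where each implicit step reduces to a \( 2 \times 2 \) linear solve. With these parameters, implicit Euler drains essentially all of the energy over the run, while the trapezoidal member conserves it up to roundoff.

import numpy as np

# The kappa-family of integrators applied to one stiff linear spring with
# U(x) = 0.5 * k * x^2, so the implicit update is just a 2x2 linear solve.
# Stiffness, mass, step size, and initial conditions are arbitrary choices.
def simulate(kappa, k=1.0e4, m=1.0, h=1.0e-2, steps=1000, x0=1.0, v0=0.0):
    x, v = x0, v0
    energies = []
    for _ in range(steps):
        A = np.array([[1.0, -h * kappa],
                      [h * k * kappa / m, 1.0]])
        b = np.array([x + h * (1.0 - kappa) * v,
                      v - h * k * (1.0 - kappa) * x / m])
        x, v = np.linalg.solve(A, b)
        energies.append(0.5 * m * v**2 + 0.5 * k * x**2)
    return np.array(energies)

E0 = 0.5 * 1.0e4 * 1.0**2        # initial energy for x0 = 1, v0 = 0
for kappa in (1.0, 0.5):         # implicit Euler vs trapezoidal
    E = simulate(kappa)
    print(kappa, E[-1] / E0)     # fraction of the initial energy retained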

In our work, we develop a new numerical integrator that is symplectic but much more resilient than the trapezoidal or midpoint integrators. Building on this symplectic integrator, we develop a new energy correction mechanism via interpolation that maintains inversion-free and interpenetration-free guarantees and is compatible with the incremental potential contact model. Shown below is a frictionless scene simulated with the second-order backward differentiation formula (BDF-2) and with our method. BDF-2 is better than implicit Euler but still suffers from uncontrollable numerical dissipation, retaining only 15% of the energy by the end of the simulation (middle). Using a much smaller time step is necessary to bring the dissipation down to 50% of the energy (left). For our method, we use a controlled energy decay at the original time step that decays to 50% energy (right). The simulation with our new method is more dynamic and interesting for animation purposes than BDF-2 at the same time step. (Because of the very large time step, we cannot expect either method to be close to the true solution. In particular, our method has a tendency to dissipate high-frequency motion and amplify low-frequency motion.)

Figure 2. Bunny and ball drop scene computed with BDF-2 at h=1/120s (middle), our method at h=1/120s with controlled decay (right), and reference BDF-2 at h=1/2400s and h=1/240s (left).

Panel methods for computer graphics

The vorticity form of the incompressible Navier-Stokes equations states that \[ \frac{D \omega}{Dt} = \omega \cdot (\nabla u) + \nu \nabla^2 \omega, \] where \( \nu \) is the kinematic viscosity. It is sometimes desirable to simulate the vorticity \( \omega = \nabla \times u \) directly, since it captures the complexity of the fluid flow. From the boundary, vorticity diffuses into the domain with length scale \( \sim (\nu t)^{1/2} \). At very large Reynolds numbers, a very thin sheet of vorticity forms in the so-called boundary layer. Instabilities and singularities occur in the boundary layer, which cause fluid material, and with it vorticity, to erupt from the boundary layer in spikes. This phenomenon of vorticity separation is not yet well understood.
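In two dimensions, the most direct way to work with vorticity is with point vortices, whose induced velocity is given by the Biot-Savart kernel. The sketch below (an illustration only, not the separation model discussed next; the regularization, strengths, positions, and step size are arbitrary choices) advects two co-rotating vortices:

import numpy as np

# 2D point-vortex sketch: velocities from the regularized Biot-Savart kernel,
# positions advected with forward Euler. Illustration only; delta, the vortex
# strengths, positions, and step size are arbitrary choices.
def biot_savart(targets, positions, strengths, delta=1e-3):
    r = targets[:, None, :] - positions[None, :, :]            # pairwise offsets
    r2 = np.sum(r**2, axis=-1) + delta**2                      # regularized squared distance
    k = np.stack([-r[..., 1], r[..., 0]], axis=-1) / r2[..., None]
    return np.sum(strengths[None, :, None] * k, axis=1) / (2.0 * np.pi)

pos = np.array([[-0.5, 0.0], [0.5, 0.0]])   # two co-rotating vortices
gam = np.array([1.0, 1.0])

h = 0.01
for _ in range(500):
    vel = biot_savart(pos, pos, gam)        # mutually induced velocities
    pos = pos + h * vel
print(pos)   # the pair should have rotated about the origin, staying about 1 apart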

Panel methods in aerodynamics model the vorticity separation at the sharp trailing edge of the wing as a flat sheet of wake, and the forces due to the wake can correctly account for the drag forces on the wing and explain d'Alembert's paradox. These traditional methods work for cusped edges in 2D, and only recently has the vorticity separation model been extended to non-cusped edges. In our work, we revisit the vorticity separation model for non-cusped edges and also investigate whether it can be extended to 3D for arbitrary geometric meshes. If possible, this will allow for efficient simulation of solid-fluid coupling at vanishing viscosities.