
Does every divergence-free vector field arise as the curl of some vector field?


I was introduced to the concepts of gradient $\nabla f$, curl $\nabla \times F$ and divergence $\nabla \cdot F$ in an introductory course on calculus during my undergraduate studies. There I learnt that for any scalar function $f$ on $\mathbb{R}^2$ or $\mathbb{R}^3$, we have $\nabla \times (\nabla f) = 0$. Moreover, we saw the following standard example of a vector field $F$ on an open subset of $\mathbb{R}^2$ such that $\nabla \times F = 0$ but for which $F \neq \nabla f$ for any scalar function $f$, namely, $$ F(x, y) = \left( \frac{-y}{x^2 + y^2}, \frac{x}{x^2 + y^2} \right) $$ for every $(x, y) \in \mathbb{R}^2 \setminus \{ (0,0) \}$.
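(As a quick sanity check, here is a small sympy sketch, nothing rigorous: symbolically, the field above has zero curl away from the origin, yet its circulation around the unit circle is $2\pi \neq 0$, which is why it cannot equal $\nabla f$ for any $f$ on the punctured plane.)

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
F1 = -y / (x**2 + y**2)
F2 = x / (x**2 + y**2)

# Scalar curl in the plane: dF2/dx - dF1/dy vanishes away from the origin.
print(sp.simplify(sp.diff(F2, x) - sp.diff(F1, y)))  # 0

# Circulation around the unit circle (x, y) = (cos t, sin t): the integrand
# simplifies to 1, so the circulation is 2*pi and F cannot be a gradient.
integrand = (F1 * sp.diff(sp.cos(t), t) + F2 * sp.diff(sp.sin(t), t)).subs(
    {x: sp.cos(t), y: sp.sin(t)})
print(sp.integrate(sp.simplify(integrand), (t, 0, 2 * sp.pi)))  # 2*pi
```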

Analogously, it seems to me, we also showed that for every vector field $F$ on $\mathbb{R}^2$ or $\mathbb{R}^3$, we have $\nabla \cdot (\nabla \times F) = 0$. However, we did not consider the converse: if $F$ is a vector field on some subset of $\mathbb{R}^2$ or $\mathbb{R}^3$ such that $\nabla \cdot F = 0$, is it true that $F = \nabla \times G$ for some vector field $G$?
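(For concreteness, here is a quick sympy verification of that identity for arbitrary smooth components; it boils down to the equality of mixed partial derivatives.)

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
F1, F2, F3 = (sp.Function(n)(x, y, z) for n in ('F1', 'F2', 'F3'))

# Curl of F = (F1, F2, F3), written out component by component.
curl = (sp.diff(F3, y) - sp.diff(F2, z),
        sp.diff(F1, z) - sp.diff(F3, x),
        sp.diff(F2, x) - sp.diff(F1, y))

# Divergence of the curl: the mixed partial derivatives cancel in pairs.
div_curl = sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)
print(sp.simplify(div_curl))  # 0
```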

My guess is that the converse is not true, just as the analogous earlier question about the curl and the gradient had a negative answer. However, I'm not able to come up with any counterexample. I understand that (in the $\mathbb{R}^2$ case) I'm looking for a vector field $G(x, y) = G_1(x, y) \hat{\iota} + G_2(x, y) \hat{\jmath}$ such that $$ \frac{\partial G_2}{\partial x} - \frac{\partial G_1}{\partial y} = F_3(x, y), $$ where $F = F_3(x, y) \hat{k}$ is the vector field I'm starting out with. This reminds me of an exact differential equation, but I admit I'm not familiar enough with that concept to proceed further.

More generally, I would like to know whether we can put precise conditions on when a divergence-free vector field is the curl of some vector field. Like in the earlier case, I expect that the geometry of space should play an important role: there, asking that the domain be simply connected was a sufficient condition to guarantee the existence of the function $f$, so perhaps something similar will happen here as well?

1 answer


tl;dr We can formulate your question more nicely with geometric algebra. As r~~ mentioned in a comment, there is a counterexample to your question on a simply connected but not 2-connected domain. The counterexamples provided are (proportional to) Green's functions for the vector derivative. When the problem is formulated in terms of differential forms, the Poincaré lemma shows that we can always find a potential when the domain is a contractible submanifold of $\mathbb R^n$. Going beyond this leads to de Rham cohomology.

First, let's get rid of the abomination known as the cross product by moving to geometric algebra. See the link for a fuller treatment; as a very brief introduction: in addition to the dot product or inner product of vectors $u \cdot v$, we'll add the wedge/outer/exterior product $u \wedge v$. This behaves like a product (i.e. it's bilinear and associative) except that it is anti-commutative, i.e. $u \wedge v = -v \wedge u$. Much like the inner product of vectors isn't a vector, the outer product of two vectors gives a directed area element known as a bivector. You can visualize this as a parallelogram in the plane spanned by the vectors that's oriented clockwise or counter-clockwise based on the direction of the vectors. We can wedge a bivector together with a vector to get a trivector, i.e. an oriented volume element, and so forth. In general, we can talk about $k$-blades, which are wedge products of $k$ vectors, and $k$-vectors, which are linear combinations of $k$-blades. (Something special about dimension $3$ and lower is that all $k$-vectors are $k$-blades in these dimensions.) Multivectors are linear combinations of $k$-vectors for various values of $k$. In an $n$-dimensional space, there is a unique (up to scaling) $n$-vector/blade known as the pseudoscalar. $I$ will be used generically to represent a unit pseudoscalar.

We can then define the geometric product $uv$ of two vectors as $uv = u \cdot v + u \wedge v$. If you fix $u$ and think about the situation geometrically, you should be able to convince yourself that (as long as $u$ is non-zero) the product $uv$ determines $v$ uniquely, and that this is true in any dimension. Algebraically, this corresponds to the existence of an inverse $v^{-1} = v/v^2$. Via bilinearity, we can lift the geometric product into a product of multivectors in general, though arbitrary non-zero multivectors need not have inverses. The pseudoscalar induces a duality: if $A$ is a $k$-vector, then $AI$ is an $(n-k)$-vector. We sometimes talk of "pseudovectors", i.e. $(n-1)$-vectors, which we can identify with vectors via this duality. This is the basis of the cross product. The cross product is the dual of the wedge product, e.g. $u \times v = (u \wedge v)I$. However, this expression only produces a vector when $n=3$.
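If it helps to compute along, here is a minimal numeric sketch of these definitions. The encoding is my own choice for illustration: I represent the bivector $u \wedge v$ by the antisymmetric matrix of its components $u_i v_j - u_j v_i$.

```python
import numpy as np

def wedge(u, v):
    """Components u_i v_j - u_j v_i of the bivector u ∧ v, as an antisymmetric matrix."""
    return np.outer(u, v) - np.outer(v, u)

u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.5, 2.0])

# Anti-commutativity: u ∧ v = -(v ∧ u).
assert np.allclose(wedge(u, v), -wedge(v, u))

# Geometric product uv = u·v + u∧v; the inverse of a non-zero vector is v/v²,
# so the product v v⁻¹ has scalar part 1 and vanishing bivector part.
v_inv = v / np.dot(v, v)
assert np.isclose(np.dot(v, v_inv), 1.0)
assert np.allclose(wedge(v, v_inv), 0.0)
```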

To do calculus, we can introduce the vector derivative $\nabla$ which behaves algebraically like a vector in geometric algebra. The quickest way to define it for someone familiar with traditional multivariable calculus is to introduce an orthonormal basis $\mathbf e_1, \dots, \mathbf e_n$ for our $n$-dimensional Euclidean space and define $\nabla = \sum_{i=1}^n \mathbf e_i \partial_i$ where $\partial_i$ is the partial derivative in the $\mathbf e_i$ direction. Given a vector field $F$, we have $\nabla F = \nabla \cdot F + \nabla \wedge F$. $\nabla \cdot F$ is the divergence as usual while we'll call $\nabla \wedge F$ the curl. As we saw above, when $n=3$ this definition is dual to the definition in terms of the cross product. The gradient is just the vector derivative applied to a scalar field. We can apply the vector derivative to arbitrary multivector fields.
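To tie this back to the familiar $n=3$ formulas, here is a small sympy check (my own component-level encoding, nothing canonical) that the bivector components $\partial_i F_j - \partial_j F_i$ of $\nabla \wedge F$ reproduce, up to the duality by the pseudoscalar, the components of the classical $\nabla \times F$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
X = [x, y, z]
F = [sp.Function(n)(x, y, z) for n in ('F1', 'F2', 'F3')]

# Bivector components of the curl: (∇ ∧ F)_{ij} = ∂_i F_j − ∂_j F_i.
wedge = {(i, j): sp.diff(F[j], X[i]) - sp.diff(F[i], X[j])
         for i in range(3) for j in range(3) if i < j}

# Components of the classical cross-product curl ∇ × F.
cross_curl = [sp.diff(F[2], y) - sp.diff(F[1], z),
              sp.diff(F[0], z) - sp.diff(F[2], x),
              sp.diff(F[1], x) - sp.diff(F[0], y)]

# The usual duality pattern: (∇×F)_1 = (∇∧F)_{23},
# (∇×F)_2 = −(∇∧F)_{13}, (∇×F)_3 = (∇∧F)_{12}.
assert sp.simplify(cross_curl[0] - wedge[(1, 2)]) == 0
assert sp.simplify(cross_curl[1] + wedge[(0, 2)]) == 0
assert sp.simplify(cross_curl[2] - wedge[(0, 1)]) == 0
```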

So far all of this is just an intro to geometric calculus. That said, I'm doing this not only because the cross product is ugly and geometric algebra/calculus is generally useful, but if you explore your question further you will quickly encounter differential forms which operate in the exterior algebra. In most cases, this exterior algebra can be straightforwardly identified with the exterior algebra part of a geometric algebra. This geometric algebra perspective is usually more concrete and geometric than typical presentations of exterior algebra, and is more algebraically convenient. The geometric algebra perspective also allows closer/clearer connections to complex analysis.

Turning, finally, to your question, there's a bit of irregularity that geometric algebra can clear up. Let's revisit the scalar field case. Let $\rho$ be a scalar field. Then $\nabla\rho = \nabla \wedge \rho$ is a vector field and $\nabla \wedge (\nabla \wedge \rho) = (\nabla \wedge \nabla) \wedge \rho = 0$. Indeed, we have $\nabla \wedge \nabla = 0$ generally. Using the general geometric algebraic identity $a \cdot (BI) = (a \wedge B)I$, we see that this is the same formula as $\nabla \times (\nabla \rho) = 0$ but no longer restricted to $n=3$. For this case, your question would be: given a vector field $v$ such that $\nabla \wedge v = 0$, is there a scalar field $\rho$ such that $v = \nabla \rho$? Your counterexample can be written, for $n=2$, as $v(x) = xI/x^2 = x^{-1}I$. Moving to your actual question, we see that $\nabla \cdot (\nabla \times F) = 0$ is equivalent to $\nabla \wedge \nabla \wedge F = 0$. Further, choosing $HI=F$, your question becomes: given an $H$ such that $\nabla \wedge H = 0$, is there a $G$ such that $H = \nabla \wedge G$? Here $H$ is a bivector field. Unsurprisingly, as r~~ mentions in a comment referencing Counterexamples in Analysis by Gelbaum and Olmsted, this also has a counterexample, on a simply connected domain no less: for $n=3$ we can write it as $F(x) = x/|x|^3$ or, with a sense of déjà vu, $H(x) = xI/|x|^3$. It is worth noting that while this example is on a simply connected (i.e. 1-connected) domain, the domain isn't 2-connected.
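A quick sympy check, via plain symbolic differentiation, that the field $F(x) = x/|x|^3$ really is divergence-free away from the origin, along with its two-dimensional analogue $x/|x|^2$ (the dual of your original counterexample):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

# n = 2: the field x/|x|^2 on the punctured plane.
r2 = x**2 + y**2
print(sp.simplify(sp.diff(x / r2, x) + sp.diff(y / r2, y)))  # 0

# n = 3: F(x) = x/|x|^3 on R^3 minus the origin.
r = sp.sqrt(x**2 + y**2 + z**2)
print(sp.simplify(sp.diff(x / r**3, x) + sp.diff(y / r**3, y) + sp.diff(z / r**3, z)))  # 0
```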

What's happening with these counterexamples is that $\nabla \wedge v = 0$ and $\nabla \wedge H = 0$ are equivalent to $\nabla \cdot v' = 0$ and $\nabla \cdot F = 0$ where $v'I = v$. The particular examples given are (proportional to) the Green's functions for the operators $\nabla$ and $\nabla \cdot{}$ in dimensions $2$ and $3$ respectively. For $n$-dimensional Euclidean space, we generally have $\nabla (x/|x|^n) = \nabla \cdot (x/|x|^n) = S_{n-1}\delta(x)$ where $S_k$ denotes the surface area of the unit $k$-sphere (so $S_2 = 4\pi$). This gives us a counterexample for any $n$. It's worth exploring this a bit more.

The fundamental theorem of calculus for geometric algebra in flat Euclidean space can be written $$\int_{\mathcal M} \grave g (\mathrm d^m x\cdot \grave\nabla) \grave f = \oint_{\partial\mathcal M} g \mathrm d^{m-1} x f$$ where $g$ and $f$ are arbitrary multivector functions, $\mathcal M$ is an $m$-dimensional submanifold of an $n$-dimensional Euclidean space and $\partial \mathcal M$ is its boundary, $\mathrm d^m x$ is the $m$-vector differential (not a scalar!), i.e. the infinitesimal tangent element at $x$, and finally, the $\grave{}$ marks indicate that the vector derivative differentiates both $g$ and $f$. (The non-commutativity of geometric algebra leads to the vector derivative operating both to its left and to its right.)

Choosing $g=1$, $m=n$, and $f(x)=x/|x|^n$, we get $\int_\mathcal M \mathrm d^n x\nabla f = \oint_{\partial \mathcal M} \mathrm d^{n-1} x f$, and I want to give an intuitive argument why this leads to $\nabla f$ behaving like the Dirac delta function. Imagine $\mathcal M$ is a ball centered at the origin. Then, on its boundary, $f$ will be a vector pointing normal to its surface and thus orthogonal to $\mathrm d^{n-1} x$. $\mathrm d^{n-1} x f$ is thus constant over the surface of any sphere centered at the origin, so the boundary integral is just that constant times the surface area. If you work it out more carefully, you'll see that the value is actually independent of the radius of the sphere. On the other hand, consider a sector of a spherical shell, i.e. the difference of two spherical sectors, centered at the origin. The sides of the sector will contribute nothing to the boundary integral because opposite sides will cancel out. The rounded ends will also cancel out, because they have opposite orientations while the integrand is constant and independent of radius on them. At this point, we can imagine approximating arbitrary submanifolds not containing the origin with spherical shell sectors to show that they all have zero integral. For submanifolds containing the origin, we can imagine splitting the integral into an integral over a sufficiently small sphere around the origin and an integral over a submanifold not containing the origin. We thus see that the value of the integral is non-zero if and only if it is over a submanifold containing the origin. If you are familiar with complex analysis, the residue theorem should come to mind. And, indeed, the above shows that we have residue theorems in higher dimensions.
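If you want to see this concretely in the $n=3$ case without the geometric-algebra machinery, here is a short sympy computation of the classical flux of $f(x) = x/|x|^3$ through a sphere of radius $r$ about the origin; the radius drops out and the answer is $4\pi = S_2$, which is the residue-like behaviour described above.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

# Sphere of radius r about the origin, parametrized by spherical angles.
X = sp.Matrix([r * sp.sin(th) * sp.cos(ph),
               r * sp.sin(th) * sp.sin(ph),
               r * sp.cos(th)])

# Outward area element dA = (dX/dtheta) x (dX/dphi) dtheta dphi.
dA = X.diff(th).cross(X.diff(ph))

# f(x) = x/|x|^3 restricted to the sphere, where |x| = r; the integrand
# f . dA reduces to sin(theta), so the radius r cancels out of the flux.
f = X / r**3
flux = sp.integrate(sp.simplify(f.dot(dA)), (th, 0, sp.pi), (ph, 0, 2 * sp.pi))
print(flux)  # 4*pi, independent of r
```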

We can generalize the fundamental theorem of calculus a bit and connect it to other areas of math. Let $\underline \omega$ be a linear function field on $k$-vectors. That is, $\underline \omega(x; K)$ is an arbitrary (generally non-linear) function of the point $x$ and a linear function of the $k$-vector $K$. We can write a more general fundamental theorem of calculus as $$\int_{\mathcal M} \underline{\grave\omega}(x; \mathrm d^m x \cdot \grave{\nabla}) = \oint_{\partial \mathcal M} \underline \omega(x; \mathrm d^{m-1}x)$$ Defining $\mathrm d\underline \omega(x; K) = \underline{\grave\omega}(x; K \cdot \grave{\nabla})$ produces a linear function field on $(k+1)$-vectors. We can then write the fundamental theorem of calculus as $$\int_{\mathcal M} \mathrm d\underline \omega(x; \mathrm d^m x) = \oint_{\partial \mathcal M} \underline \omega(x; \mathrm d^{m-1} x)$$ When $\underline \omega$ is scalar-valued, it is called a (differential) $k$-form. We can also identify $\underline \omega$ with a $k$-vector field $\omega$ via $\underline \omega(x; K) = K \cdot \omega(x)$ where $\omega(x) = \sum_{E \in \mathcal B_k} \underline \omega(x; E) E$, with $\mathcal B_k$ the set of orthonormal basis $k$-vectors. The operator $\mathrm d$ on differential forms is called the exterior derivative. We can easily show that $\mathrm d \mathrm d = 0$ as an operator equation.

Having introduced the notion of a differential form and the exterior derivative, we can introduce the notions of a closed and an exact form. A differential form $\underline \alpha$ is called closed if $\mathrm d \underline \alpha = 0$. It is called exact if there is a differential form $\underline \beta$ such that $\underline \alpha = \mathrm d\underline \beta$. If $\underline \alpha$ is a $0$-form, i.e. just a scalar function, then $\mathrm d\underline \alpha = \nabla\alpha$. If $\underline \alpha$ is a $1$-form, then $\mathrm d \underline \alpha = \nabla \wedge \alpha$. With this we see that your scalar example can be formulated as: is a closed $1$-form always exact? (Note that an exact form is always closed.) You've shown that it isn't in general, but your counterexample was on a domain that is not simply connected. Are there counterexamples on (non-empty) simply connected domains in the plane? The Poincaré lemma shows that there aren't. In fact, it shows further that all closed $k$-forms are exact on contractible submanifolds of $\mathbb R^n$. r~~'s counterexample is on a space that is simply connected but not contractible. In general, we don't have $\mathrm d \underline \alpha = \nabla \wedge \alpha$, so it is not immediately clear that the Poincaré lemma is relevant to r~~'s counterexample. However, if $\underline \alpha$ is an $(n-1)$-form on $\mathbb R^n$, then $\mathrm d \underline \alpha = I(\nabla \wedge \alpha)$. Going to more complicated manifolds leads to the world of de Rham cohomology.
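To make the Poincaré lemma concrete in the original vector-calculus language: on a domain that is star-shaped about the origin (hence contractible), the standard homotopy-operator proof produces an explicit potential. Translated into vector notation (my own translation, so treat it as a sketch), one gets $G(x) = \int_0^1 t\, F(tx) \times x \,\mathrm dt$ for divergence-free $F$. Here is a sympy check with a field of my own choosing, $F = (y, z, x)$, confirming $\nabla \times G = F$.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)
X = sp.Matrix([x, y, z])

# A divergence-free example field on all of R^3 (which is star-shaped).
F = sp.Matrix([y, z, x])
assert sum(sp.diff(F[i], X[i]) for i in range(3)) == 0

# Homotopy formula for a vector potential: G(x) = ∫_0^1 t F(tx) × x dt.
Ft = F.subs({x: t * x, y: t * y, z: t * z})
G = (t * Ft.cross(X)).applyfunc(lambda e: sp.integrate(e, (t, 0, 1)))

# Check that curl G = F.
curl_G = sp.Matrix([sp.diff(G[2], y) - sp.diff(G[1], z),
                    sp.diff(G[0], z) - sp.diff(G[2], x),
                    sp.diff(G[1], x) - sp.diff(G[0], y)])
print((curl_G - F).applyfunc(sp.simplify))  # the zero vector
```

Of course, this construction needs the star-shaped hypothesis; $\mathbb R^3 \setminus \{0\}$ is not star-shaped about any point (and not contractible), which is exactly where the counterexample above lives.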
