Post History

#2: Post edited by Derek Elkins · 2021-02-20T19:38:13Z (about 3 years ago)
Minor tweaks and clarifications.
**tl;dr** We can formulate your question more nicely with geometric algebra. As r~~ mentioned in a comment, there is a counter-example to your question on a simply-connected but not 2-connected domain. The counter-examples provided are Green's functions for the vector derivative. Formulating the problem in terms of differential forms, the Poincaré lemma shows that we will be able to find a potential whenever the domain is a contractible submanifold of $\mathbb R^n$. Going beyond this leads to de Rham cohomology.

First, let's get rid of the abomination known as the cross product by moving to [geometric algebra](http://geocalc.clas.asu.edu/html/Intro.html). See the link, but as a very brief introduction: in addition to the **dot product** or **inner product** of vectors $u \cdot v$, we'll add the **wedge/outer/exterior product** $u \wedge v$. This behaves like a product (i.e. it's bilinear and associative) except that it is anti-commutative, i.e. $u \wedge v = -v \wedge u$. Much like the inner product of two vectors isn't a vector, the outer product of two vectors gives a directed area element known as a **bivector**. You can visualize this as a parallelogram in the plane spanned by the vectors, oriented clockwise or counter-clockwise based on the direction of the vectors. We can wedge a bivector together with a vector to get a **trivector**, i.e. an oriented volume element, and so forth. In general, we can talk about **$k$-blades**, which are wedge products of $k$ vectors, and **$k$-vectors**, which are linear combinations of $k$-blades. (Something special about dimension $3$ and lower is that all $k$-vectors are $k$-blades in these dimensions.) **Multivectors** are linear combinations of $k$-vectors for various values of $k$. In an $n$-dimensional space, there is a unique (up to scaling) $n$-vector/blade known as the **pseudoscalar**. $I$ will be used generically to represent a unit pseudoscalar.

We can then define the **geometric product** $uv$ of two vectors as $uv = u \cdot v + u \wedge v$. If you fix $u$ and think about the situation geometrically, you should be able to convince yourself that (as long as $u$ is non-zero) $v$ is uniquely determined by the product $uv$, and that this is true in any dimension. Algebraically, this corresponds to the existence of an inverse $v^{-1} = v/v^2$. Via bilinearity, we can lift the geometric product into a product of multivectors in general, though we don't have inverses for arbitrary non-zero multivectors. The pseudoscalar induces a duality where if $A$ is a $k$-vector, then $AI$ is an $(n-k)$-vector. We sometimes talk of "pseudovectors", which are the $(n-1)$-vectors that we can identify with vectors via this duality. This is the basis of the cross product: the cross product is the dual of the wedge product, e.g. $u \times v = (u \wedge v)I$. However, this expression only produces a vector when $n=3$.

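
To make these products concrete, here is a minimal sketch (my own illustration, not part of the original post) of a Euclidean geometric algebra in Python, representing basis blades as bitmasks; all names are made up for this example:

```python
from itertools import product
from collections import defaultdict

# A multivector is a dict mapping basis blades to coefficients, where a blade
# is a bitmask: bit i set means the blade contains the basis vector e_{i+1}.

def reorder_sign(a, b):
    """Sign incurred by reordering the product of blades a and b into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def gp(u, v):
    """Geometric product of multivectors u and v (Euclidean metric, e_i e_i = 1)."""
    out = defaultdict(float)
    for (a, ca), (b, cb) in product(u.items(), v.items()):
        out[a ^ b] += reorder_sign(a, b) * ca * cb
    return {blade: c for blade, c in out.items() if abs(c) > 1e-12}  # drop numeric zeros

def grade(u, k):
    """The grade-k part of a multivector."""
    return {b: c for b, c in u.items() if bin(b).count("1") == k}

# Example vectors: for vectors, u.v is the grade-0 part and u^v the grade-2 part of uv.
u = {0b001: 1.0, 0b010: 2.0}   # e1 + 2 e2
v = {0b010: 1.0, 0b100: 3.0}   # e2 + 3 e3
```

With this, one can check directly that $uv = u \cdot v + u \wedge v$, that $u \wedge v = -v \wedge u$, and that $v\,(v/v^2) = 1$, matching the inverse formula above.
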
To do calculus, we can introduce the **vector derivative** $\nabla$ which behaves algebraically like a vector in geometric algebra. The quickest way to define it for someone familiar with traditional multivariable calculus is to introduce an orthonormal basis $\mathbf e_1, \dots, \mathbf e_n$ for our $n$-dimensional Euclidean space and define $\nabla = \sum_{i=1}^n \mathbf e_i \partial_i$ where $\partial_i$ is the partial derivative in the $\mathbf e_i$ direction. Given a vector field $F$, we have $\nabla F = \nabla \cdot F + \nabla \wedge F$. $\nabla \cdot F$ is the divergence as usual while we'll call $\nabla \wedge F$ the curl. As we saw above, when $n=3$ this definition is dual to the definition in terms of the cross product. The gradient is just the vector derivative applied to a scalar field. We can apply the vector derivative to arbitrary multivector fields.

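
As a sanity check on this decomposition (again my own illustration), we can approximate the scalar and bivector parts of $\nabla F$ for a sample field with central finite differences and compare them with the hand-computed divergence and curl. The field $F(x,y,z) = (xy,\, yz,\, zx)$ is an arbitrary choice for this sketch:

```python
# Finite-difference check that the grade-0 and grade-2 parts of the vector
# derivative of F are the usual divergence and (bivector) curl.
H = 1e-5

def partial(f, i, p):
    """Central-difference partial derivative of f in direction i at point p."""
    hi, lo = list(p), list(p)
    hi[i] += H
    lo[i] -= H
    return (f(hi) - f(lo)) / (2 * H)

F = [lambda p: p[0] * p[1],   # F_1 = x y
     lambda p: p[1] * p[2],   # F_2 = y z
     lambda p: p[2] * p[0]]   # F_3 = z x

def divergence(p):
    return sum(partial(F[i], i, p) for i in range(3))

def curl_bivector(p):
    """Coefficients of the wedge part of the derivative on the basis e12, e13, e23."""
    return (partial(F[1], 0, p) - partial(F[0], 1, p),
            partial(F[2], 0, p) - partial(F[0], 2, p),
            partial(F[2], 1, p) - partial(F[1], 2, p))
```

At $p = (1,2,3)$ the divergence is $y + z + x = 6$, and $\nabla \wedge F = -x\,\mathbf e_{12} + z\,\mathbf e_{13} - y\,\mathbf e_{23} = -\mathbf e_{12} + 3\,\mathbf e_{13} - 2\,\mathbf e_{23}$, which the finite differences reproduce.
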
So far all of this is just an intro to geometric calculus. That said, I'm doing this not only because the cross product is ugly and geometric algebra/calculus is generally useful, but also because, if you explore your question further, you will quickly encounter differential forms, which operate in the [exterior algebra](https://en.wikipedia.org/wiki/Exterior_algebra). In most cases, this exterior algebra can be straightforwardly identified with the exterior algebra part of a geometric algebra. This geometric algebra perspective is usually more concrete and geometric than typical presentations of exterior algebra, and is more algebraically convenient. The geometric algebra perspective also allows closer/clearer connections to complex analysis.

Turning, finally, to your question, there's a bit of irregularity that geometric algebra can clear up. Let's revisit the scalar field case. Let $\rho$ be a scalar field. Then $\nabla\rho = \nabla \wedge \rho$ is a vector field and $\nabla \wedge (\nabla \wedge \rho) = (\nabla \wedge \nabla) \wedge \rho = 0$. Indeed, we have $\nabla \wedge \nabla = 0$ generally. Using the general geometric algebraic identity $a \cdot (BI) = (a \wedge B)I$, we see that this is the same formula as $\nabla \times (\nabla \rho) = 0$ but no longer restricted to $n=3$. For this case, your question would be: given a vector field $v$ such that $\nabla \wedge v = 0$, is there a scalar field $\rho$ such that $v = \nabla \rho$? Your counterexample can be written, for $n=2$, as $v(x) = xI/x^2 = x^{-1}I$. Moving to your actual question, we see that $\nabla \cdot (\nabla \times F) = 0$ is equivalent to $\nabla \wedge \nabla \wedge F = 0$. Further, choosing $HI=F$, your question becomes: given an $H$ such that $\nabla \wedge H = 0$, is there a $G$ such that $H = \nabla \wedge G$? Here $H$ is a *bivector* field. Unsurprisingly, as r~~ mentions in a comment referencing *Counterexamples in Analysis* by Gelbaum and Olmsted, this also has a counterexample on a simply connected domain for $n=3$, which we can write as $F(x) = x/|x|^3$ or, in a sense of déjà vu, $H(x) = xI/|x|^3$. It is worth noting that while this example is on a simply connected (i.e. 1-connected) domain, the domain isn't [2-connected](https://en.wikipedia.org/wiki/N-connected_space).

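
In coordinates, the $n=2$ counterexample $v(x) = xI/x^2$ (with $I = \mathbf e_1 \mathbf e_2$) is the familiar field $(-y, x)/(x^2+y^2)$. A quick numeric illustration (mine, not the post's) that it is curl-free away from the origin yet has non-zero circulation around the unit circle, so no global potential can exist:

```python
import math

def v(x, y):
    """v = xI / |x|^2 written out in coordinates: (-y, x) / (x^2 + y^2)."""
    r2 = x * x + y * y
    return (-y / r2, x / r2)

def curl_at(x, y, h=1e-6):
    """The single e12 component of the wedge derivative of v, via central differences."""
    dvy_dx = (v(x + h, y)[1] - v(x - h, y)[1]) / (2 * h)
    dvx_dy = (v(x, y + h)[0] - v(x, y - h)[0]) / (2 * h)
    return dvy_dx - dvx_dy

def circulation(n=100_000):
    """Line integral of v around the unit circle; it comes out to 2*pi, not 0."""
    dt = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        t = k * dt
        vx, vy = v(math.cos(t), math.sin(t))
        # v dotted with the tangent (dx/dt, dy/dt) = (-sin t, cos t), times dt
        total += (vx * -math.sin(t) + vy * math.cos(t)) * dt
    return total
```

If $v$ were a gradient $\nabla \rho$ on the whole punctured plane, the closed-loop integral would vanish; the value $2\pi$ is exactly the obstruction.
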
What's happening with these counterexamples is that $\nabla \wedge v = 0$ and $\nabla \wedge H = 0$ are equivalent to $\nabla \cdot v' = 0$ and $\nabla \cdot F = 0$ where $v'I = v$. The particular examples given are (proportional to) the [Green's functions](https://en.wikipedia.org/wiki/Green's_function) for the operators $\nabla$ and $\nabla \cdot{}$ in dimensions $2$ and $3$ respectively. For $n$-dimensional Euclidean space, we generally have $\nabla (x/|x|^n) = \nabla \cdot (x/|x|^n) = S_{n-1}\delta(x)$ where $S_n$ is the [surface area of the unit $n$-sphere](https://en.wikipedia.org/wiki/N-sphere#Closed_forms). This gives us a counterexample for any $n$. It's worth exploring this a bit more.

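
To see the Green's-function behaviour numerically for $n=3$ (an illustrative sketch, not from the original answer), we can check that the flux of $F(x) = x/|x|^3$ through a sphere around the origin is $S_2 = 4\pi$ regardless of the radius, just as $\nabla \cdot F = S_2\,\delta(x)$ predicts:

```python
import math

def flux_through_sphere(r, n=200):
    """Midpoint-rule flux of F(x) = x/|x|^3 through the sphere of radius r."""
    dtheta = math.pi / n
    dphi = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta
        for j in range(n):
            phi = (j + 0.5) * dphi
            # Outward unit normal, and the point r * normal on the sphere.
            nx = math.sin(theta) * math.cos(phi)
            ny = math.sin(theta) * math.sin(phi)
            nz = math.cos(theta)
            px, py, pz = r * nx, r * ny, r * nz
            cube = (px * px + py * py + pz * pz) ** 1.5
            fx, fy, fz = px / cube, py / cube, pz / cube
            area = r * r * math.sin(theta) * dtheta * dphi
            total += (fx * nx + fy * ny + fz * nz) * area
    return total
```

On the sphere of radius $r$, $F \cdot n = 1/r^2$ while the area is $4\pi r^2$, so the $r$-dependence cancels; the quadrature confirms this for any radius you try.
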
The [fundamental theorem of calculus](http://geocalc.clas.asu.edu/pdf/NFMPchapt2.pdf) for geometric algebra in flat Euclidean space can be written $$\int_{\mathcal M} \grave g (\mathrm d^m x\cdot \grave\nabla) \grave f = \oint_{\partial\mathcal M} g \mathrm d^{m-1} x f$$ where $g$ and $f$ are arbitrary multivector functions, $\mathcal M$ is an $m$-dimensional submanifold of an $n$-dimensional Euclidean space and $\partial \mathcal M$ is its boundary, $\mathrm d^m x$ is the *$m$-vector* differential (not a scalar!), i.e. the infinitesimal tangent element at $x$, and finally, the $\grave{}$ marks indicate that the vector derivative differentiates both $g$ and $f$. (The non-commutativity of geometric algebra leads to the vector derivative operating both to its left and to its right.)

Choosing $g=1$, $m=n$, and $f(x)=x/|x|^n$, we get $\int_\mathcal M \mathrm d^n x \nabla f = \oint_{\partial \mathcal M} \mathrm d^{n-1} x f$, and I want to give an intuitive argument why this leads to $\nabla f$ behaving like the Dirac delta function. Imagine $\mathcal M$ is a ball centered at the origin. Then, on its boundary, $f$ will be a vector pointing normal to its surface and thus orthogonal to $\mathrm d^{n-1} x$. $\mathrm d^{n-1} x f$ is thus constant on the surface of any sphere centered at the origin and thus so is the boundary integral. If you work it out more carefully, you'll see that the value is actually independent of the radius of the $n$-sphere. On the other hand, consider a sector of a spherical shell, i.e. the difference of two [spherical sectors](https://en.wikipedia.org/wiki/Spherical_sector), centered at the origin. The sides of the sector will contribute nothing to the boundary integral because opposite sides will cancel out. The rounded ends will also cancel out, because they have opposite orientations but the integral is constant and independent of radius on them. At this point, we can imagine approximating arbitrary submanifolds not containing the origin with spherical shell sectors to show that they all have zero integral. For submanifolds containing the origin, we can imagine splitting the integral into an integral over a sufficiently small sphere around the origin and an integral over a submanifold not containing the origin. We thus see that the value of the integral is non-zero if and only if it is over a submanifold containing the origin. If you are familiar with complex analysis, the [residue theorem](https://en.wikipedia.org/wiki/Residue_theorem) should come to mind. And, indeed, the above shows that we have residue theorems in higher dimensions.

We can generalize the fundamental theorem of calculus a bit and connect it to other areas of math. Let $\underline \omega$ be a linear function field on $k$-vectors. That is, $\underline \omega(x; K)$ is a (generally non-linear) function of the point $x$ and a linear function of the $k$-vector $K$. We can write a more general fundamental theorem of calculus as $$\int_{\mathcal M} \underline{\grave\omega}(x; \mathrm d^m x \cdot \grave{\nabla}) = \oint_{\partial \mathcal M} \underline \omega(x; \mathrm d^{m-1}x)$$ Defining $\mathrm d\underline \omega(x; K) = \underline{\grave\omega}(x; K \cdot \grave{\nabla})$ produces a linear function field on $(k+1)$-vectors. We can then write the fundamental theorem of calculus as $$\int_{\mathcal M} \mathrm d\underline \omega(x; \mathrm d^m x) = \oint_{\partial \mathcal M} \underline \omega(x; \mathrm d^{m-1} x)$$ When $\underline \omega$ is scalar-valued, it is called a **(differential) $k$-form**. We can also identify $\underline \omega$ with a $k$-vector via $\underline \omega(x; K) = K \cdot \omega(x)$ where $\omega(x) = \sum_{E \in \mathcal B_k} \underline \omega(x; E) E$ where $\mathcal B_k$ is the set of orthonormal basis $k$-vectors. The operator $\mathrm d$ on differential forms is called the **exterior derivative**. We can easily show that $\mathrm d \mathrm d = 0$ as an operator equation.

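
For a $0$-form (a scalar function), $\mathrm d\mathrm d f = \nabla \wedge \nabla f = 0$ boils down to the symmetry of mixed second partials, $\partial_i \partial_j f = \partial_j \partial_i f$, since these appear antisymmetrized as the bivector coefficients. A small numeric illustration with an arbitrary smooth function (my example, not the post's):

```python
import math

def f(p):
    """An arbitrary smooth 0-form (scalar function) on R^3."""
    x, y, z = p
    return x * x * y + math.sin(y * z)

def mixed(g, i, j, p, h=1e-4):
    """Nested central differences approximating the mixed partial d_i d_j g at p."""
    def dj(q):
        hi, lo = list(q), list(q)
        hi[j] += h
        lo[j] -= h
        return (g(hi) - g(lo)) / (2 * h)
    hi, lo = list(p), list(p)
    hi[i] += h
    lo[i] -= h
    return (dj(hi) - dj(lo)) / (2 * h)
```

Each $\mathbf e_i \wedge \mathbf e_j$ coefficient of $\nabla \wedge \nabla f$ is $\partial_i \partial_j f - \partial_j \partial_i f$, and the numeric mixed partials agree to rounding error, as the symmetry requires.
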
Having introduced the notion of a differential form and the exterior derivative, we can introduce the notions of [a closed and an exact form](https://en.wikipedia.org/wiki/Closed_and_exact_differential_forms). A differential form $\underline \alpha$ is called **closed** if $\mathrm d \underline \alpha = 0$. It is called **exact** if there is a differential form $\underline \beta$ such that $\underline \alpha = \mathrm d\underline \beta$. If $\underline \alpha$ is a $0$-form, i.e. just a scalar function, then $\mathrm d\underline \alpha = \nabla\alpha$. If $\underline \alpha$ is a $1$-form, then $\mathrm d \underline \alpha = \nabla \wedge \alpha$. With this, we see that your scalar example can be formulated as: is a closed $1$-form always exact? (Note, an exact form is always closed.) You've shown that it isn't in general, but your counter-example was on a domain that is not simply connected. Are there counter-examples on (non-empty) simply connected [domains](https://en.wikipedia.org/wiki/Domain_(mathematical_analysis)) in the plane? The [Poincaré lemma](https://en.wikipedia.org/wiki/Closed_and_exact_differential_forms#Poincar%C3%A9_lemma) shows that there [aren't](https://en.wikipedia.org/wiki/Riemann_mapping_theorem). In fact, it shows further that all closed $k$-forms are exact on contractible submanifolds of $\mathbb R^n$. r~~'s counter-example is on a space that is simply-connected but not contractible. In general, we don't have $\mathrm d \underline \alpha = \nabla \wedge \alpha$, so the Poincaré lemma might not be relevant to r~~'s counter-example. However, if $\underline \alpha$ is an $(n-1)$-form on $\mathbb R^n$, then $\mathrm d \underline \alpha = I(\nabla \wedge \alpha)$. Going to more complex manifolds leads to the world of [de Rham cohomology](https://en.wikipedia.org/wiki/De_Rham_cohomology).

#1: Initial revision by Derek Elkins · 2021-02-20T03:11:55Z (about 3 years ago)
**tl;dr** We can formulate your question more nicely with geometric algebra. As r~~ mentioned in a comment, there is a counter-example to your question on a simply-connected but not 2-connected domain. The counter-examples provided are Green's functions for the vector derivative. Formulating the problem in terms of differential forms, the Poincaré lemma shows that we will be able to find a potential whenever the domain is a contractible submanifold of $\mathbb R^n$. Going beyond this leads to de Rham cohomology.

First, let's get rid of the abomination known as the cross product by moving to [geometric algebra](http://geocalc.clas.asu.edu/html/Intro.html). See the link, but as a very brief introduction in addition to the **dot product** or **inner product** of vectors $u \cdot v$, we'll add the **wedge/outer/exterior product** $u \wedge v$. This behaves like a product (i.e. it's bilinear and associative) except that it is anti-commutative, i.e. $u \wedge v = -v \wedge u$. Much like the inner product of vectors isn't a vector, the outer product of two vectors gives a directed area element known as a **bivector**. You can visualize this as a parallelogram in the plane spanned by the vectors that's oriented clockwise or counter-clockwise based on the direction of the vectors. We can wedge a bivector together with a vector to get a **trivector**, i.e. an oriented volume element, and so forth. In general, we can talk about **$k$-blades** which are wedge products of $k$ vectors, and $k$-vectors which are linear combinations of $k$-blades. (Something special about dimension $3$ and lower is that all $k$-vectors are $k$-blades in these dimensions.) **Multivectors** are linear combinations of $k$-vectors for various values of $k$. In an $n$-dimensional space, there is a unique (up to scaling) $n$-vector/blade known as the **pseudoscalar**. $I$ will be used generically to represent a unit pseudoscalar.

We can then define the **geometric product**, $uv$ of two vectors as $uv = u \cdot v + u \wedge v$. If you fix $u$ and think about the situation geometrically, you should be able to convince yourself that (as long as $u$ is non-zero), $v$ is uniquely defined, and that this is true in any dimension. Algebraically, this corresponds to the existence of an inverse $v^{-1} = v/v^2$. Via bilinearity, we can lift the geometric product into a product of multivectors in general, though we don't have inverses to arbitrary non-zero multivectors. The pseudoscalar induces a duality where if $A$ is a $k$-vector, then $AI$ is an $(n-k)$-vector. We sometimes talk of "pseudovectors" which are the $(n-1)$-vectors which we can identify via this duality with vectors. This is the basis of the cross product. The cross product is the dual of the wedge product, e.g. $u \times v = (u \wedge v)I$. However, this expression only produces a vector when $n=3$.

To do calculus, we can introduce the **vector derivative** $\nabla$ which behaves algebraically like a vector in geometric algebra. The quickest way to define it for someone familiar with traditional multivariable calculus is to introduce an orthonormal basis $\mathbf e_1, \dots, \mathbf e_n$ for our $n$-dimensional Euclidean space and define $\nabla = \sum_{i=1}^n \mathbf e_i \partial_i$ where $\partial_i$ is the partial derivative in the $\mathbf e_i$ direction. (We'll return to this.) Given a vector field $F$, we have $\nabla F = \nabla \cdot F + \nabla \wedge F$. $\nabla \cdot F$ is the divergence as usual while we'll call $\nabla \wedge F$ the curl. As we saw above, when $n=3$ this definition is dual to the definition in terms of the cross product. The gradient is just the vector derivative applied to a scalar field. We can apply the vector derivative to arbitrary multivector fields.

So far all of this is just an intro to geometric calculus. That said, I'm doing this not only because the cross product is ugly and geometric algebra/calculus is generally useful, but if you explore your question further you will quickly encounter differential forms which operate in the [exterior algebra](https://en.wikipedia.org/wiki/Exterior_algebra). In most cases, this exterior algebra can be straightforwardly identified with the exterior algebra part of a geometric algebra. This geometric algebra perspective is usually more concrete and geometric than typical presentations of exterior algebra, and is more algebraically convenient. The geometric algebra perspective also allows closer/clearer connections to complex analysis.

Turning, finally, to your question, there's a bit of incoherence that geometric algebra can clear up. Let's revisit the scalar field case. Let $\rho$ be a scalar field. Then $\nabla\rho = \nabla \wedge \rho$ is a vector field and $\nabla \wedge (\nabla \wedge \rho) = (\nabla \wedge \nabla) \wedge \rho = 0$. Indeed, we have $\nabla \wedge \nabla = 0$ generally. Using the general geometric algebraic identity $a \cdot (BI) = (a \wedge B)I$, we see that this is the same formula as $\nabla \times (\nabla \rho) = 0$ but no longer restricted to $n=3$. For this case, your question would be: given a vector field $v$ such that $\nabla \wedge v = 0$, is there a scalar field $\rho$ such that $v = \nabla \rho$? Your counterexample can be written, for $n=2$, as $v(x) = xI/x^2 = x^{-1}I$. Moving to your actual question, we see that $\nabla \cdot (\nabla \times F) = 0$ is equivalent to $\nabla \wedge \nabla \wedge F = 0$. Further, choosing $HI=F$, your question becomes: given an $H$ such that $\nabla \wedge H = 0$, is there a $G$ such that $H = \nabla \wedge G$? Here $H$ is a *bivector* field. Unsurprisingly, as r~~ mentions in a comment referencing *Counterexamples in Analysis* by Gelbaum and Olsted, this also has a counterexample and on a simply connected domain for $n=3$ which we can write as $F(x) = x/|x|^3$ or, in a sense of deja vu, $H(x) = xI/|x|^3$. It is worth noting that while this example is on a simply connected (i.e. 1-connected) domain, the domain isn't [2-connected](https://en.wikipedia.org/wiki/N-connected_space).

What's happening with these counterexamples is that $\nabla \wedge v = 0$ and $\nabla \wedge H = 0$ are equivalent to $\nabla \cdot v' = 0$ and $\nabla \cdot F = 0$ where $v'I = v$. The particular examples given are (proportional to) the [Green's functions](https://en.wikipedia.org/wiki/Green's_function) for the operators $\nabla$ and $\nabla \cdot{}$ in dimensions $2$ and $3$ respectively. For $n$-dimensional Euclidean space, we generally have $\nabla (x/|x|^n) = \nabla \cdot (x/|x|^n) = S_{n-1}\delta(x)$ where $S_n$ is the [surface area of the unit $n$-sphere](https://en.wikipedia.org/wiki/N-sphere#Closed_forms). This gives us a counterexample for any $n$. It's worth exploring this a bit more.
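For the $n=3$ instance of this, a symbolic check (coordinates mine) shows both halves of the Green's function claim: $\nabla \cdot (x/|x|^3) = 0$ away from the origin, while the flux through the unit sphere is $S_2 = 4\pi$, which is where the $\delta(x)$ concentrates:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
Fx, Fy, Fz = x/r**3, y/r**3, z/r**3

# Divergence vanishes everywhere except the origin:
div = sp.simplify(sp.diff(Fx, x) + sp.diff(Fy, y) + sp.diff(Fz, z))
print(div)  # 0

# Flux through the unit sphere: there F . n = 1, so in spherical
# coordinates F . n dA reduces to sin(theta) dtheta dphi:
theta, phi = sp.symbols('theta phi', real=True)
flux = sp.integrate(sp.sin(theta), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))
print(flux)  # 4*pi
```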

The [fundamental theorem of calculus](http://geocalc.clas.asu.edu/pdf/NFMPchapt2.pdf) for geometric algebra in flat Euclidean space can be written $$\int_{\mathcal M} \grave g (\mathrm d^m x\cdot \grave\nabla) \grave f = \oint_{\partial\mathcal M} g \mathrm d^{m-1} x f$$ where $g$ and $f$ are arbitrary multivector functions, $\mathcal M$ is an $m$-dimensional submanifold of an $n$-dimensional Euclidean space and $\partial \mathcal M$ is its boundary, $\mathrm d^m x$ is the *$m$-vector* differential (not a scalar!), i.e. the infinitesimal tangent element at $x$, and finally, the $\grave{}$ marks indicate that the vector derivative differentiates both $g$ and $f$. (The non-commutativity of geometric algebra leads to the vector derivative operating both to its left and to its right.)
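The scalar part of this theorem with $g=1$, $m=n$, and $f$ a vector field is the ordinary divergence theorem. As a small sanity check (the field and region here are my own arbitrary choices, not from the text), we can verify it symbolically on the unit cube with $F = (xy, yz, zx)$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
Fx, Fy, Fz = x*y, y*z, z*x

# Volume integral of div F over [0,1]^3:
div = sp.diff(Fx, x) + sp.diff(Fy, y) + sp.diff(Fz, z)
vol = sp.integrate(div, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Flux of F through the six faces, with outward normals:
flux = (sp.integrate(Fx.subs(x, 1) - Fx.subs(x, 0), (y, 0, 1), (z, 0, 1))
      + sp.integrate(Fy.subs(y, 1) - Fy.subs(y, 0), (x, 0, 1), (z, 0, 1))
      + sp.integrate(Fz.subs(z, 1) - Fz.subs(z, 0), (x, 0, 1), (y, 0, 1)))

print(vol, flux)  # both 3/2
```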

Choosing $g=1$, $m=n$, and $f(x)=x/|x|^n$, we get $\int_\mathcal M \mathrm d^n x\nabla f = \oint_{\partial \mathcal M} \mathrm d^{n-1} x f$, and I want to give an intuitive argument for why this leads to $\nabla f$ behaving like the Dirac delta function. Imagine $\mathcal M$ is a ball centered at the origin. Then on its boundary, $f$ will be a vector pointing normal to the surface and thus orthogonal to $\mathrm d^{n-1} x$. $\mathrm d^{n-1} x f$ is thus constant on the surface of any sphere centered at the origin, and thus so is the boundary integral. If you work it out more carefully, you'll see that the result is actually independent of the radius of the sphere. On the other hand, consider a sector of a spherical shell, i.e. the difference of two [spherical sectors](https://en.wikipedia.org/wiki/Spherical_sector), centered at the origin. The sides of the sector will contribute nothing to the boundary integral because opposite sides will cancel out. The rounded ends will also cancel out, because they have opposite orientations but the integral is constant and independent of radius on them. At this point, we can imagine approximating arbitrary submanifolds not containing the origin with spherical shell sectors to show that they all have zero integral. For submanifolds containing the origin, we can split the integral into an integral over a sufficiently small sphere around the origin and an integral over a submanifold not containing the origin. We thus see that the value of the integral is non-zero if and only if it is over a submanifold containing the origin. If you are familiar with complex analysis, the [residue theorem](https://en.wikipedia.org/wiki/Residue_theorem) should come to mind. And indeed, the above shows that we have residue theorems in higher dimensions.
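The shell-sector cancellation above can be made concrete for $n=3$ (parametrization mine): the flux of $f(x) = x/|x|^3$ through a spherical cap of half-angle $\alpha$ is $2\pi(1-\cos\alpha)$ regardless of the radius, so the two caps of a shell sector cancel, while $f$ is tangent to the conical side and contributes nothing:

```python
import sympy as sp

theta, phi, alpha = sp.symbols('theta phi alpha', positive=True)

# On a radius-R sphere, f . n dA = (1/R^2)(R^2 sin(theta)) dtheta dphi
# = sin(theta) dtheta dphi, with the R's cancelling. So the flux
# through a cap of half-angle alpha is independent of R:
cap = sp.integrate(sp.sin(theta), (theta, 0, alpha), (phi, 0, 2*sp.pi))
print(sp.simplify(cap))  # 2*pi*(1 - cos(alpha))

# Shell sector between two radii: the outer cap (outward normal) and
# inner cap (inward normal) cancel; f . n = 0 on the conical side:
total = cap + (-cap) + 0
print(total)  # 0
```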

We can generalize the fundamental theorem of calculus a bit and connect it to other areas of math. Let $\underline \omega$ be a linear function field on $k$-vectors. That is, $\underline \omega(x; K)$ is an arbitrary (generally non-linear) function of the point $x$ and a linear function of the $k$-vector $K$. We can write a more general fundamental theorem of calculus as $$\int_{\mathcal M} \underline{\grave\omega}(x; \mathrm d^m x \cdot \grave{\nabla}) = \oint_{\partial \mathcal M} \underline \omega(x; \mathrm d^{m-1}x)$$ Defining $\mathrm d\underline \omega(x; K) = \underline{\grave\omega}(x; K \cdot \grave{\nabla})$ produces a linear function field on $(k+1)$-vectors. We can then write the fundamental theorem of calculus as $$\int_{\mathcal M} \mathrm d\underline \omega(x; \mathrm d^m x) = \oint_{\partial \mathcal M} \underline \omega(x; \mathrm d^{m-1} x)$$ When $\underline \omega$ is scalar-valued, it is called a **(differential) $k$-form**. We can also identify $\underline \omega$ with a $k$-vector via $\underline \omega(x; K) = K \cdot \omega(x)$ where $\omega(x) = \sum_{E \in \mathcal B_k} \underline \omega(x; E) E$ where $\mathcal B_k$ is the set of basis $k$-vectors. The operator $\mathrm d$ on differential forms is called the **exterior derivative**. We can easily show that $\mathrm d \mathrm d = 0$ as an operator equation.
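In $\mathbb R^3$, under the identification of forms with multivectors, $\mathrm d$ on $0$-forms is the gradient, on $1$-forms the curl, and on $2$-forms the divergence, so $\mathrm d\mathrm d = 0$ specializes to $\nabla \times (\nabla f) = 0$ and $\nabla \cdot (\nabla \times F) = 0$. A symbolic check with arbitrary functions (names mine):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
f = sp.Function('f')(x, y, z)
P = sp.Function('P')(x, y, z)
Q = sp.Function('Q')(x, y, z)
S = sp.Function('S')(x, y, z)

grad = lambda g: [sp.diff(g, v) for v in (x, y, z)]
curl = lambda F: [sp.diff(F[2], y) - sp.diff(F[1], z),
                  sp.diff(F[0], z) - sp.diff(F[2], x),
                  sp.diff(F[1], x) - sp.diff(F[0], y)]
div = lambda F: sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

# dd = 0 in both incarnations, by equality of mixed partials:
cg = [sp.simplify(c) for c in curl(grad(f))]
dc = sp.simplify(div(curl([P, Q, S])))
print(cg, dc)  # [0, 0, 0] and 0
```

Both identities reduce to the symmetry of second partial derivatives, which is the coordinate content of $\mathrm d\mathrm d = 0$.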

Having introduced the notion of a differential form and the exterior derivative, we can introduce the notions of [a closed and an exact form](https://en.wikipedia.org/wiki/Closed_and_exact_differential_forms). A differential form $\underline \alpha$ is called **closed** if $\mathrm d \underline \alpha = 0$. It is called **exact** if there is a differential form $\underline \beta$ such that $\underline \alpha = \mathrm d\underline \beta$. If $\underline \alpha$ is a $0$-form, i.e. just a scalar function, then $\mathrm d\underline \alpha = \nabla\alpha$. If $\underline \alpha$ is a $1$-form, then $\mathrm d \underline \alpha = \nabla \wedge \alpha$. With this we see that your scalar example can be formulated as: is a closed $1$-form always exact? (Note that an exact form is always closed.) You've shown that it isn't in general, but your counter-example was on a domain that is not simply connected. Are there counter-examples on simply connected domains? The [Poincaré lemma](https://en.wikipedia.org/wiki/Closed_and_exact_differential_forms#Poincar%C3%A9_lemma) shows that there aren't. In fact, it shows further that all closed $k$-forms are exact on contractible submanifolds of $\mathbb R^n$. r~~'s counter-example is on a space that is simply connected but not contractible. In general, we don't have $\mathrm d \underline \alpha = \nabla \wedge \alpha$, so the Poincaré lemma might not be relevant to r~~'s counter-example. However, if $\underline \alpha$ is an $(n-1)$-form on $\mathbb R^n$, then $\mathrm d \underline \alpha = I(\nabla \wedge \alpha)$. Going to more complicated manifolds leads to the world of [de Rham cohomology](https://en.wikipedia.org/wiki/De_Rham_cohomology).