## Harmonic Analysis: The Hilbert Transform

September 30, 2010

Posted by Sarah in Uncategorized.

I’ve really been enjoying my harmonic analysis class and I thought I’d write up some recent notes.

The Poisson kernel, $P_r(\theta)$, is defined as $P_r(\theta) = \sum_{k=-\infty}^{\infty} r^{|k|} e^{i k \theta} = 1 + 2\,\mathrm{Re}\left(\frac{z}{1-z}\right) = \mathrm{Re}\left(1 + \frac{2z}{1-z}\right) = \mathrm{Re}\left(\frac{1+z}{1-z}\right),$ where $z = r e^{i\theta}$.
This is the real part of a holomorphic function. Now, what are the properties of that holomorphic function?
We define the Herglotz kernel as $H_r(\theta) = \frac{1+z}{1-z} = P_r(\theta) + i q_r(\theta),$ where $q_r(\theta) = \mathrm{Im}\left(\frac{1+z}{1-z}\right)$, so that $f * H_r = f * P_r + i\, f * q_r$.
As $r \to 1$, $f * P_r(\theta) \to f$ (this is why the Poisson kernel solves the Dirichlet problem in the disc). Similarly, as $r \to 1$, $f * q_r(\theta) \to \tilde{f}$, a new function. This is the conjugate function of $f$, and the limiting convolution with $q_r$ is known as the Hilbert transform.

Now, we can write $q_r(\theta)$ explicitly as $i\, q_r(\theta) = \sum_k \mathrm{sgn}(k)\, r^{|k|} e^{i k \theta},$
similar to the Poisson kernel, except for the sign function. Summing this series, $q_r(\theta) = \frac{2 r \sin \theta}{1 - 2 r \cos \theta + r^2}$ while $P_r(\theta) = \frac{1 - r^2}{1 - 2 r \cos \theta + r^2}.$
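
These closed forms are easy to sanity-check numerically. Here's a quick sketch of mine (not from the class notes) comparing truncated series against the formulas:

```python
# Compare truncated Fourier series for P_r and q_r with their closed forms.
import numpy as np

r, theta = 0.7, 1.3
K = 200  # truncation level; the series converge geometrically since r < 1
k = np.arange(-K, K + 1)

P_series = np.sum(r**np.abs(k) * np.exp(1j * k * theta)).real
# q_r = -i * sum_k sgn(k) r^|k| e^{ik theta} (the imaginary parts cancel)
q_series = np.sum(-1j * np.sign(k) * r**np.abs(k) * np.exp(1j * k * theta)).real

P_closed = (1 - r**2) / (1 - 2*r*np.cos(theta) + r**2)
q_closed = 2*r*np.sin(theta) / (1 - 2*r*np.cos(theta) + r**2)

print(abs(P_series - P_closed), abs(q_series - q_closed))  # both ~1e-16
```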
This conjugate function of $f$ need not be bounded, even if $f$ is. In other words, the Hilbert transform is not bounded in the $L^\infty$ norm. It's not bounded in the $L^1$ norm either. But it's clearly bounded in the $L^2$ norm: by Parseval, it just multiplies the $k$-th Fourier coefficient by $-i\,\mathrm{sgn}(k)$, which leaves its absolute value unchanged.
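
The multiplier description translates directly into code. Here's a minimal sketch of mine, using the FFT on a uniform grid as a stand-in for the circle:

```python
# The Hilbert transform as a Fourier multiplier: the k-th coefficient of
# f~ is -i * sgn(k) times the k-th coefficient of f.
import numpy as np

def hilbert_transform(f):
    """Conjugate function of real samples f on a uniform grid over [0, 2*pi)."""
    n = len(f)
    k = np.fft.fftfreq(n, d=1.0/n)  # integer frequencies 0, 1, ..., -1
    return np.real(np.fft.ifft(-1j * np.sign(k) * np.fft.fft(f)))

theta = np.linspace(0, 2*np.pi, 256, endpoint=False)
f = np.cos(3*theta)
tf = hilbert_transform(f)

print(np.allclose(tf, np.sin(3*theta)))                 # True: cos -> sin
print(np.linalg.norm(tf) <= np.linalg.norm(f) + 1e-12)  # the L^2 bound
```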

The Hilbert transform can be shown to be bounded on $L^p$ for $1 < p < 2$ by first showing that it satisfies a weak-type (1,1) inequality and then showing that any linear operator satisfying a weak-type (1,1) inequality and bounded on $L^2$ is bounded on $L^p$ for $1 < p < 2$ (this is the Marcinkiewicz Interpolation Theorem; a duality argument then handles $2 < p < \infty$). I might type that up another time.

If we let $F = f + i \tilde{f}$, then $F$ is sort of an envelope for $f$, larger in absolute value and smoother, and having the property that it stretches and shrinks with the function.
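
For intuition about this envelope: scipy computes exactly the analytic signal $F = f + i\tilde{f}$ (on finite sample arrays, so only an approximation to the circle picture), and $|F|$ traces out the amplitude:

```python
# |f + i*f~| recovers the slowly varying amplitude of an oscillating signal.
import numpy as np
from scipy.signal import hilbert  # returns the analytic signal F = f + i*f~

t = np.linspace(0, 1, 2000, endpoint=False)
amplitude = 1 + 0.5*np.cos(2*np.pi*3*t)
f = amplitude * np.cos(2*np.pi*100*t)  # fast oscillation, slow envelope

envelope = np.abs(hilbert(f))
print(np.max(np.abs(envelope - amplitude)))  # ~0: |F| traces the envelope
```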

## Current reading: neuroscience

September 30, 2010

Posted by Sarah in Uncategorized.

Right now my “interests” are supposed to be limited to passing quals. Fair enough, but I’m also persistently trying to get a sense of what math can tell us about how humans think and how to model it. Except that I don’t actually know any neuroscience. So I’ve been remedying that.

Here’s one overview paper that goes over the state of the field, in terms of brain architecture and hierarchical organization. Neurons literally form circuits, and, in rough outline, we know where those circuits are. We can look at the responses of those circuits in vivo to observe the ways in which the brain clusters and organizes content: even to the point of constructing a proto-grammar based on a tree of responses to different sentences. I hadn’t realized that so much was known already — the brain is mysterious, of course, but it’s less mysterious than I had imagined.

Then here’s an overview paper by Yale’s Steve Zucker about edge and texture detection in images using differential geometry. In his model, detection of edges and textures is based on the tangent bundle. Apparently, unlike some approaches in computational vision, this differential-geometry approach has neurological correlates in the structure of the connections in the visual cortex. The visual cortex is arranged in a set of columns; the hypothesis is that these represent $\mathbb{R}^2 \times S^1$, with each column representing a position and the slices at different heights of the column representing orientations.
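
To make the lift concrete, here's a toy sketch of my own (an illustration of the general idea, not Zucker's actual model): an image on $\mathbb{R}^2$ gets lifted to $\mathbb{R}^2 \times S^1$ by recording oriented-filter responses, one slice per orientation:

```python
# Lift an image from R^2 to R^2 x S^1 via responses to oriented Gabor filters.
import numpy as np
from scipy.ndimage import convolve

def gabor(theta, sigma=3.0, freq=0.25, size=15):
    """Gabor kernel with preferred orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x*np.cos(theta) + y*np.sin(theta)  # rotate coordinates by theta
    g = np.exp(-(xr**2 + y**2 + x**2 - xr**2) / (2*sigma**2)) * np.cos(2*np.pi*freq*xr)
    return g - g.mean()  # zero-mean, so flat regions give no response

img = np.tril(np.ones((64, 64)))  # test image: a diagonal edge

# The lift: an array indexed by (orientation, row, column), i.e. one response
# per point of R^2 x S^1, a column of "slices" over each position.
thetas = np.linspace(0, np.pi, 8, endpoint=False)
lift = np.stack([convolve(img, gabor(t)) for t in thetas])

# At each position, the strongest-responding slice gives the local orientation.
preferred = thetas[np.abs(lift).argmax(axis=0)]
```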

## What is a quantum vector space?

September 24, 2010

Posted by Sarah in Uncategorized.

I went to the Friday grad seminar — this one by Hyun Kyu on the quantum Teichmuller space. (Here’s his paper, which I haven’t read as of now.) I thought it might be helpful to learn some background about this whole “quantum” business.

One way of thinking about the ordinary plane is to consider it as the algebra freely generated by the elements x and y subject to the commutation relation yx = xy. Now, what if we alter this description to instead have yx = qxy? This defines something known as the quantum plane. Here, q is an element of the ground field. Obviously, except when q = 1, this is a non-commutative algebra. For any pair of positive integers i and j, we have $y^i x^j = q^{ij} x^j y^i.$
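
A tiny sketch of mine that mechanizes this: treat a monomial in x and y as a string, repeatedly rewrite each adjacent yx as qxy, and check that the $q^{ij}$ exponent comes out right:

```python
# Normal-ordering in the quantum plane: every swap of an adjacent 'yx'
# into 'xy' costs one factor of q, since yx = q*xy.
def normal_order(word, q):
    """Return (coefficient, word) with all x's moved to the left of all y's."""
    coeff, chars = 1, list(word)
    done = False
    while not done:
        done = True
        for i in range(len(chars) - 1):
            if chars[i] == 'y' and chars[i + 1] == 'x':
                chars[i], chars[i + 1] = 'x', 'y'
                coeff *= q
                done = False
    return coeff, ''.join(chars)

q, i, j = 2.0, 2, 3
coeff, word = normal_order('y'*i + 'x'*j, q)
print(coeff == q**(i*j) and word == 'x'*j + 'y'*i)  # True: y^2 x^3 = q^6 x^3 y^2
```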
We can define quantum versions of lots of things. For example, $SL_q(2, \mathbb{C})$ is the quantum analogue of the group of 2-by-2 matrices with determinant 1: the algebra generated by the matrix entries a, b, c, d, with the quantum determinant $ad - qbc$ set equal to 1, satisfying the relations

$ab = q\,ba$, $bc = cb$, $cd = q\,dc$, $ac = q\,ca$, $bd = q\,db$, and $ad - da = (q - q^{-1})bc$.

The “quantum” here is in the mathematical sense of a non-commutative deformation of a commutative algebra.
More on the subject: “What is a Quantum Group?”

## See Your Group

September 23, 2010

Posted by Sarah in Uncategorized.

Because I am a geek, and because I love my hometown, I had a T-shirt made.  The inspiration, of course, is the storied Valois Cafeteria in Hyde Park, where you can “see your food.” It’s a local institution, known for comfort food and more recently famous as an Obama favorite.

## Johnson-Lindenstrauss and RIP

September 10, 2010

Posted by Sarah in Uncategorized.

Via Nuit Blanche, I found a paper relating the Johnson-Lindenstrauss Theorem to the Restricted Isometry Property (RIP). It caught my eye because the authors are Rachel Ward and Felix Krahmer, whom I actually know! I met Rachel when I was an undergrad and she was a grad student. We taught high school girls together and climbed mountains in Park City, where I also met Felix.

The Johnson-Lindenstrauss Lemma says that any $p$ points in $n$-dimensional Euclidean space can be embedded in $O(\epsilon^{-2} \log(p))$ dimensions without distorting the distance between any two points by more than a factor between $(1 - \epsilon)$ and $(1 + \epsilon)$. So it gives us almost-isometric embeddings of high-dimensional data in lower dimensions.
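
Here's a quick demonstration of mine, using a Gaussian random projection (one of the classical ways to realize the lemma; the dimension constant below follows the Dasgupta-Gupta proof):

```python
# Johnson-Lindenstrauss: project p points from R^n down to m = O(eps^-2 log p)
# dimensions with a random Gaussian matrix; pairwise distances survive.
import numpy as np

rng = np.random.default_rng(0)
n, p, eps = 1000, 50, 0.3
m = int(np.ceil(4 * np.log(p) / (eps**2/2 - eps**3/3)))  # Dasgupta-Gupta bound

X = rng.normal(size=(p, n))               # p points in R^n
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random projection, E||Ax||^2 = ||x||^2
Y = X @ A.T                               # images in R^m

ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i in range(p) for j in range(i + 1, p)]
print(min(ratios), max(ratios))  # within [1 - eps, 1 + eps] w.h.p.
```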

The Restricted Isometry Property, familiar to students of compressed sensing, is the property of a matrix $\Phi$ that $(1-\delta) ||x||_2^2 \le ||\Phi x||_2^2 \le (1 + \delta) ||x||_2^2$
for all sufficiently sparse vectors $x$.
The relevance of this property is that these are the matrices for which $\ell_1$-minimization actually yields the sparsest vector — that is, RIP is a sufficient condition for basis pursuit to work.
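
To see what basis pursuit means in practice, here's a sketch of mine (not from the paper): the $\ell_1$-minimization is a linear program, and with a Gaussian measurement matrix it recovers a sparse vector exactly:

```python
# Basis pursuit: min ||x||_1 subject to Phi @ x = y, as a linear program
# via the split x = u - v with u >= 0, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, s = 60, 30, 4                          # ambient dim, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian matrices satisfy RIP w.h.p.
y = Phi @ x_true

c = np.ones(2 * n)                           # objective: sum(u) + sum(v) = ||x||_1
res = linprog(c, A_eq=np.hstack([Phi, -Phi]), b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print(np.max(np.abs(x_hat - x_true)))        # ~0: exact recovery
```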

Now these two concepts involve equations that look a lot alike… and it turns out that they actually are related. The authors show that RIP matrices produce Johnson-Lindenstrauss embeddings. In particular, they produce improved Johnson-Lindenstrauss bounds for special types of matrices with known RIP bounds.

The proof relies upon Rademacher sequences (random signs, each $\pm 1$ with probability 1/2), which I don’t know a lot about, but should probably learn.

[N.B.: corrections from the original post come from Felix.]

## Concreteness

September 5, 2010

Posted by Sarah in Uncategorized.