Konkrete Analysis

First, I want to share some solutions to the exercises of a maths book: Konkrete Analysis by Folkmar Bornemann. The book is written in German, as are my solutions. (I think I cover about 70% of all exercises in the book and pretty much every easy one :-))

I don’t claim that my solutions are correct, and there are probably quite a few (uncorrected) mistakes, but so far I haven’t been able to find any other openly available solutions. Although the book claims to be easily readable without attending Professor Bornemann’s lectures, I doubt that this works unless you actually do the exercises, and while doing so you will want to be able to get help or to look at possible solutions for new ideas.

I did all the exercises as a way to prepare myself for the exam (it was very successful, albeit probably not in the most time-efficient way), so the PDF also contains solutions for old exams (but those are appended at the end and probably not interesting at all).

You can download the PDF konkrete-analysis-solutions here.

Cauchy-Schwarz and a Problem

The book is quite useful, though, as was the lecture “Analysis for Computer Scientists”, which was based on the book (or vice versa). It is probably the single lecture that has taught me the most since I started studying Computer Science at university.

One of the many topics (we covered a lot, which was one of the nice things) was inequalities and the useful things you can do with them. The Cauchy-Schwarz inequality in particular turns out to be quite useful while being pretty basic:

\(\langle a, b \rangle \leq \|a\| \cdot \|b\|\), with \(\langle a, b \rangle = \|a\| \cdot \|b\|\) when \(a = \lambda b\) for some \(\lambda \geq 0\)
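Here is a tiny numerical sanity check (my own, in Python with NumPy; the vectors and the seed are just illustrative) that the inequality and its equality case behave as stated:

```python
import numpy as np

# Quick numerical sanity check of Cauchy-Schwarz (just an illustration):
# for random vectors the inequality holds, and for a = lambda * b with
# lambda >= 0 it becomes an equality.
rng = np.random.default_rng(0)
a = rng.random(5)
b = rng.random(5)

lhs = np.dot(a, b)
rhs = np.linalg.norm(a) * np.linalg.norm(b)
assert lhs <= rhs + 1e-12

lam = 2.5
a_dep = lam * b  # linearly dependent case
assert np.isclose(np.dot(a_dep, b), np.linalg.norm(a_dep) * np.linalg.norm(b))
```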

Interestingly enough, this can already yield results that are quite difficult to obtain otherwise. For example, we can easily prove lower or upper bounds by applying it, and even show the existence of a maximum or minimum of some functions.

To do this, we first obtain a lower or upper bound (for the right or the left side, respectively) and then use the equality case to prove that this bound is actually attained, i.e. that a minimum or maximum exists.

For example, let’s take a look at the following problem:

We have \[ x \in \mathbb{R}^n, \quad 0 < x, \quad \sum_{i=1}^n x_i = 1 \] and want to minimize \[ \sum_{i=1}^n \frac{1}{x_i}. \] We simply set \[ a_i := \sqrt{x_i}, \quad b_i := \sqrt{\frac{1}{x_i}} = \frac{1}{\sqrt{x_i}}. \]

With Cauchy-Schwarz we then get:

\[ \sum_{i=1}^n a_i b_i = \sum_{i=1}^n \sqrt{x_i} \cdot \frac{1}{\sqrt{x_i}} = \sum_{i=1}^n 1 = n \] \[ \leq \sqrt{\sum_{i=1}^n \sqrt{x_i}^2} \cdot \sqrt{\sum_{i=1}^n \frac{1}{\sqrt{x_i}^2}} = \sqrt{\sum_{i=1}^n x_i} \cdot \sqrt{\sum_{i=1}^n \frac{1}{x_i}} = 1 \cdot \sqrt{\sum_{i=1}^n \frac{1}{x_i}} \]

That is, after squaring both sides: \[ \sum_{i=1}^n \frac{1}{x_i} \geq n^2. \]

Now we know a lower bound for the reciprocal sum. To determine the minimizing vector \(x\), we recall the equality condition from the beginning:

\[ a = \lambda b \Leftrightarrow \sqrt{x_i} = \lambda \cdot \frac{1}{\sqrt{x_i}} \Leftrightarrow x_i = \lambda \]

Combining this with the constraint gives \[ 1 = \sum_{i=1}^n x_i = \sum_{i=1}^n \lambda = n \cdot \lambda, \] resulting in \[ x_i = \lambda = \frac 1 n. \] So the lower bound \(n^2\) is actually attained, and the minimum is at \(x_i = \frac 1 n\).
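As a quick plausibility check (my own sketch, not from the book; the dimension and the random points are just illustrative), one can compare the reciprocal sum at \(x_i = \frac 1 n\) against random feasible points:

```python
import numpy as np

# Plausibility check: the reciprocal sum at x_i = 1/n equals n^2,
# and random feasible points never do better.
rng = np.random.default_rng(1)
n = 5

x_opt = np.full(n, 1.0 / n)
print(np.sum(1.0 / x_opt))  # 25.0 == n**2

for _ in range(1000):
    x = rng.random(n)
    x /= x.sum()              # project onto the constraint sum(x) = 1
    assert np.sum(1.0 / x) >= n**2 - 1e-9
```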

Generalization

The nice thing is that you can generalize this finding to more interesting minimization problems, since there are more advanced versions of the Cauchy-Schwarz inequality (actually Hölder’s inequality) that add degrees of freedom to play around with (just one additional degree of freedom in my case).

Thus it is possible to solve the following class of problems using a slightly more advanced Cauchy-Schwarz inequality:

\[ x,q \in \mathbb{R}^n; 0 < x, q\] \[0 < \alpha, \gamma < \infty; \beta > 0 \] \[\large\sum_{i=1}^n {x_i}^\gamma = \beta \] \[\min \sum_{i=1}^n \frac {q_i} {x_i^\alpha} \]

by utilizing

\[ \large \sum_{i=1}^n a_i b_i \le \sqrt[p]{\sum_{i=1}^n a_i^p} \cdot \sqrt[p']{\sum_{i=1}^n b_i^{p'}} \] for \(a_i, b_i \geq 0\), with \[ \frac 1 p + \frac 1 {p'} = 1, \quad p, p' > 1. \]

Equality holds under a condition analogous to the linear dependence condition for the normal Cauchy-Schwarz inequality: the vectors \((a_i^p)_i\) and \((b_i^{p'})_i\) have to be linearly dependent (for \(p = p' = 2\) this is exactly the usual condition).

I’ve only deduced the solution for \(\gamma = 1\), but it shouldn’t be difficult to adapt once you get the idea of how it works (it’s fairly straightforward).
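To give a rough idea of how the exponents come out (this is my own sketch for \(\gamma = 1\), not necessarily the exact splitting used in the PDF): choose the two factors so that the \(x_i\) cancel in the product, e.g.

\[ a_i := x_i^{\frac 1 p}, \quad b_i := \frac{q_i^{\frac 1 {p'}}}{x_i^{\frac \alpha {p'}}}. \]

Then \(a_i b_i\) is independent of \(x_i\) exactly when \(\frac 1 p = \frac \alpha {p'}\), which together with \(\frac 1 p + \frac 1 {p'} = 1\) gives \(p = \frac{1 + \alpha}{\alpha}\) and \(p' = 1 + \alpha\). Hölder’s inequality then yields

\[ \sum_{i=1}^n q_i^{\frac 1 {1 + \alpha}} = \sum_{i=1}^n a_i b_i \le \left( \sum_{i=1}^n x_i \right)^{\frac \alpha {1 + \alpha}} \left( \sum_{i=1}^n \frac{q_i}{x_i^\alpha} \right)^{\frac 1 {1 + \alpha}} = \beta^{\frac \alpha {1 + \alpha}} \left( \sum_{i=1}^n \frac{q_i}{x_i^\alpha} \right)^{\frac 1 {1 + \alpha}}, \]

and raising both sides to the power \(1 + \alpha\) gives exactly the lower bound that appears as the minimum value below.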

You can find the whole deduction in the PDF “Minimum of a Generalized Reciprocal Sum”.

For completeness’ sake, here are the results for the optimal solution \(x^{*}\) (in the case \(\gamma = 1\)):

\[\large x_i^{*} = \beta \cdot \frac { \sqrt[1 + \alpha]{q_i} }{\sum_{k=1}^n \sqrt[1 + \alpha]{q_k}} \]

\[\large \frac { \left ( \sum_{i=1}^n \sqrt[{1 + \alpha}]{q_i} \right )^{1 + \alpha}} {\beta^\alpha}=\sum_{i=1}^n {\frac {q_i} { {x_i^{*} }^\alpha} }\]
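Again, a quick numerical plausibility check (my own, not part of the PDF; the values of \(n\), \(\alpha\), \(\beta\) and \(q\) are just illustrative) for \(\gamma = 1\):

```python
import numpy as np

# Plausibility check for the gamma = 1 case: the closed-form x* satisfies
# the constraint, attains the claimed minimum value, and random feasible
# points never do better.
rng = np.random.default_rng(2)
n, alpha, beta = 4, 1.5, 3.0
q = rng.random(n) + 0.1

root = q ** (1.0 / (1.0 + alpha))
x_star = beta * root / root.sum()
claimed_min = root.sum() ** (1.0 + alpha) / beta ** alpha


def objective(x):
    return np.sum(q / x ** alpha)


assert np.isclose(x_star.sum(), beta)
assert np.isclose(objective(x_star), claimed_min)

for _ in range(1000):
    x = rng.random(n)
    x *= beta / x.sum()       # enforce the constraint sum(x) = beta
    assert objective(x) >= claimed_min - 1e-9
```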

Epilog

Coming up with the solution was actually a lot of fun and quite interesting because I didn’t know it was possible to deduce it this way nor did I know how to do it before. If you read the PDF, I think it’s quite nice how you can play around with the formulas and come up with new things from the Cauchy-Schwarz inequality.

If you are only interested in \(x^{*}\), you can shortcut everything by skipping straight to the equality case once \(p\) and \(p'\) have been determined and solving for that. I mainly wrote it all out because I wanted to make sure that it actually was correct.

If it weren’t for my Analysis lecture last term, I probably wouldn’t have found this.