How to Edit Your Unit 4 L 1 Math 8 Online with Ease
Start editing, signing and sharing your Unit 4 L 1 Math 8 online by following these easy steps:
- Click the Get Form or Get Form Now button on the current page to access the PDF editor.
- Wait a moment while the Unit 4 L 1 Math 8 is loaded.
- Use the tools in the top toolbar to edit the file; the edited content will be saved automatically.
- Download your modified file.
A top-rated Tool to Edit and Sign the Unit 4 L 1 Math 8


A clear tutorial on editing Unit 4 L 1 Math 8 Online
It has become very simple lately to edit your PDF files online, and CocoDoc is an excellent online PDF editor for making edits to your file and saving it. Follow our simple tutorial to start!
- Click the Get Form or Get Form Now button on the current page to start modifying your PDF
- Add, modify or erase your content using the editing tools on the top toolbar.
- After editing your content, add the date and a signature to complete the form.
- Go over your form again before you save and download it.
How to add a signature on your Unit 4 L 1 Math 8
Though most people are in the habit of signing paper documents by hand, electronic signatures are becoming more common. Follow these steps to add a signature:
- Click the Get Form or Get Form Now button to begin editing on Unit 4 L 1 Math 8 in CocoDoc PDF editor.
- Click on the Sign icon in the tool menu at the top.
- A box will pop up, click Add new signature button and you'll have three choices—Type, Draw, and Upload. Once you're done, click the Save button.
- Drag the signature to position it inside your PDF file.
How to add a textbox on your Unit 4 L 1 Math 8
If you need to add a text box to your PDF to create your own content, follow this guide to complete it.
- Open the PDF file in CocoDoc PDF editor.
- Click Text Box on the top toolbar and move your mouse to place it wherever you want.
- Fill in the content you need to insert. After you’ve inserted the text, you can actively use the text editing tools to resize, color or bold the text.
- When you're done, click OK to save it. If you're not satisfied with the text, click the trash can icon to delete it and start over.
An easy guide to Edit Your Unit 4 L 1 Math 8 on G Suite
If you are seeking a solution for PDF editing on G Suite, CocoDoc PDF editor is a suggested tool that can be used directly from Google Drive to create or edit files.
- Find CocoDoc PDF editor and install the add-on for Google Drive.
- Right-click on a chosen file in your Google Drive and select Open With.
- Select CocoDoc PDF on the popup list to open your file with it, and allow CocoDoc to access your Google account.
- Make changes to the PDF file (add text or images, edit existing text, highlight, or retouch text) in CocoDoc PDF editor, then click the Download button.
PDF Editor FAQ
Can you show me how to integrate by using partial fractions?
Oh, I’d be happy to.

Integration by partial fractions has a peculiar pedagogical pedigree. It is often presented in calculus courses as a way of integrating rational functions, but it is rarely accompanied by a complete theoretical description – it’s more like a messy recipe with various cases which suffice for the final exam. The presentation leaves many students unsure of what exactly it is they’re doing, what the general case is like, and why. This is usually left unresolved throughout undergrad and graduate studies.

I can’t think of another instance where a basic result or technique is taught in such a half-assed way. Curriculum designers often choose to leave out or include this or that in an undergrad program, but when something is included, it is usually included properly. The situation with partial fractions is akin to learning about the Jordan canonical form in the real case only, and merely through a few examples. It’s a travesty.

In the paper by Bradley and Cook I cite below, they write:

Analysts view this as pure algebra (which it is) so it should not be addressed in a course on analysis. Algebraists typically view partial fractions as a technique only good for integrating and thus a problem for analysts. So, outside of the realm of symbolic computation, the partial fraction decomposition tends to never be fully discussed.

This doesn’t bother all students, perhaps not even most students, but it drives people like me insane. I can’t comprehend anything if I don’t see the general context and complete proofs. Scattered techniques like this make me feel like my knowledge has decreased, rather than increased.

So, here’s my attempt at addressing this. I’ll try to explain the idea behind partial fraction decomposition in various contexts, as well as its (very legitimate) connection to integration, and perhaps a few other uses if we have time.
I can’t do the topic full justice in an answer; I will merely be able to outline how I would teach this if I were to.

Shall we?

Representations

Everyone knows that natural numbers can be written down as usual, in decimal notation, like [math]196884[/math], or through their prime factorization, as in [math]2^2 \times 3^3 \times 1823[/math]. Most people (I think) also understand that various things are easier or harder to do in either form of notation.

Many students of math also know that something very similar is true of polynomials. A (single variable) polynomial can be written down as [math]x^8 + 2x^7 - 2x^6 - 6x^5 + 6x^3 + 2x^2 - 2x - 1[/math], a sum of powers of [math]x[/math] with coefficients, or as a product of irreducible factors with possible multiplicity, as in [math](x-1)^3(x+1)^5[/math]. Once again, certain tasks become easier with either form of presentation.

Now, what about rational numbers? And what about rational functions, which are the analogous thing for polynomials?

We usually write rational numbers like [math]\frac{77}{12}[/math]: a numerator and a denominator expressed as decimals. Of course we can factor both numbers and write [math]\frac{7 \times 11}{2^2 \times 3}[/math]. Same thing.

But there are other ways. One form is a continued fraction, as in

[math]\displaystyle \frac{77}{12} = 6+\frac{1}{2+\frac{1}{2+\frac{1}{2}}}[/math]

This is also (extremely) useful in various ways, but we won’t go there now.
Are there other ways? Well, yes there are.

For reasons which, I think, aren’t fully understood, early texts from Egypt’s Middle Kingdom exhibit a strong preference for representing fractions as sums of distinct unit fractions, as in

[math]\displaystyle \frac{29}{24} = \frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{8}[/math].

Such representations, dubbed “Egyptian Fractions”, do sometimes have a utility, but they aren’t important enough nowadays to be taught as a systematic way of representing rational numbers. We should note the idea, though: we are decomposing a rational number into a sum of fractions which have a certain simplicity. They all have a numerator of [math]1[/math].

Also, let’s note something about the denominators in that representation: [math]2, 3, 4[/math] and [math]8[/math] all divide [math]24[/math]. This isn’t surprising, when you think about what happens when you add up fractions. We’ll see how this plays out later.

What would be the analogous thing for rational functions? And why would it be useful?

Integrating Rational Functions

First, a reminder: a rational function is a ratio of two polynomials, such as

[math]\displaystyle \frac{x^5-2x^3+1}{x^2-3}[/math].

In general, we write

[math]\displaystyle R(x) = \frac{f(x)}{g(x)}[/math]

where [math]f,g[/math] are polynomials.

Rational functions come up in many different contexts, both in pure math and in applications. Just like rational numbers, they can be represented in different ways, so first let us motivate why we would go looking for a representation as a sum of simple fractions.

When we learn about derivatives, we quickly see that differentiating any expression is a purely mechanical process.
This is because, once we know how to differentiate [math]f[/math] and [math]g[/math], we can also handle [math]f+g[/math], [math]fg[/math], and also [math]f \circ g[/math], which is the composition [math]f(g(x))[/math].

Then we start going the other way, finding antiderivatives: given a function [math]f[/math], we seek a function [math]F[/math] with [math]F'=f[/math]. This is usually denoted [math]F=\int f(x)\,\mathrm{d}x[/math], an indefinite integral. And here things get complicated.

Indefinite integrals are linear, so they behave nicely with respect to addition and scalar multiplication. But unfortunately, that’s pretty much it. If you know the antiderivatives of [math]f[/math] and [math]g[/math], there’s generally no way to determine the antiderivatives of [math]fg[/math] or [math]f\circ g[/math] and so on.

We do know how to integrate polynomials, and even some very simple rational functions, because we all know that [math]\frac{d}{dx}x^n = nx^{n-1}[/math], and so

[math]\displaystyle \int x^k \,\mathrm{d}x = \frac{1}{k+1}x^{k+1}[/math]

for any integer [math]k \ne -1[/math], while

[math]\displaystyle \int x^{-1} \,\mathrm{d}x = \log(x)[/math]

where [math]\log[/math] denotes the natural logarithm. As a result, we can also handle [math]\int (x-a)^k\,\mathrm{d}x[/math] for any number [math]a[/math] and integer [math]k[/math].

Ok, great. But what about more complicated rational functions, like [math]\frac{x^3+x+5}{x^2-1}[/math]?

Well, obviously, all we need to do is manage to write every rational function as a sum of those simple ones, like [math]\frac{1}{(x-a)^k}[/math], or even [math]\frac{17}{(x-a)^k}[/math], since we aren’t worried about multiplicative constants. Integrating sums is easy, even with arbitrary multiplicative constants (aka linear combinations).

Note well: the “simple” fractions we seek have very simple numerators, just numbers; and denominators which are just powers of a linear polynomial.
As we shall see, this works perfectly over the complex numbers, but another complication is introduced if we insist on working with real numbers only.

Partial Fraction Decomposition: Fooling Around

Before we do the general case, let’s look at some simple examples.

[math]\displaystyle \frac{1}{x-1}[/math]

This one is already fine. We know how to integrate [math](x-1)^{-1}[/math].

[math]\displaystyle \frac{x}{x-1}[/math]

Notice how merely changing the numerator from [math]1[/math] to [math]x[/math] makes the integral non-obvious. However, there’s an easy fix:

[math]\displaystyle \frac{x}{x-1} = \frac{x-1+1}{x-1}=1+\frac{1}{x-1}[/math]

and similarly

[math]\displaystyle \frac{x^2}{x-1} = \frac{(x+1)(x-1)+1}{x-1} =(x+1)+\frac{1}{x-1}[/math]

What’s going on here? This should remind you of something: we’re simply replacing the numerator with a quotient and remainder upon division by the denominator.

[math]\displaystyle \frac{19}{6} = \frac{3 \times 6 + 1}{6} = 3+\frac{1}{6}[/math]

In a similar way, we re-wrote the numerator [math]x^2[/math] as [math]q(x)(x-1)+r(x)[/math], a quotient and a remainder. A basic skill you should have before tackling partial fractions is polynomial division with remainder, aka the Euclidean Algorithm for polynomials. It really works the exact same way as with numbers.

The general conclusion here is that we can handle anything of the form

[math]\displaystyle \frac{f(x)}{x-1}[/math]

by simply dividing [math]f(x) = q(x)(x-1)+r(x)[/math], and so

[math]\displaystyle \frac{f(x)}{x-1} = q(x)+\frac{r(x)}{x-1}[/math]

In fact, since the denominator has degree [math]1[/math], the remainder [math]r(x)[/math] is guaranteed to simply be a constant – just a number. We have no trouble integrating this.

Ok, let’s move on.

[math]\displaystyle \frac{1}{(x-1)^2}[/math]

Still no problem. This is [math](x-1)^{-2}[/math]. Easy to integrate.

[math]\displaystyle \frac{x}{(x-1)^2}[/math]

Hmmm.
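Before continuing, the division-with-remainder step we have used twice can be sketched in code. This is a minimal illustration, not a library routine; `poly_divmod` is a hypothetical helper operating on coefficient lists, highest degree first.

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Divide polynomial f by g (coefficient lists, highest degree first).
    Returns (quotient, remainder) with deg(remainder) < deg(g)."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    while len(f) >= len(g):
        coef = f[0] / g[0]              # leading coefficient of this quotient term
        deg = len(f) - len(g)           # its degree
        q[len(q) - 1 - deg] = coef
        for i, gc in enumerate(g):      # subtract coef * x^deg * g from f
            f[i] -= coef * gc
        f.pop(0)                        # leading term is now zero
    return q, f                         # whatever is left of f is the remainder

# The example from the text: x^2 = (x+1)(x-1) + 1
q, r = poly_divmod([1, 0, 0], [1, -1])
assert q == [1, 1] and r == [1]
```

Exact `Fraction` arithmetic is used so the check is an equality, not a floating-point approximation.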
Polynomial division won’t save us here, because the numerator already has a smaller degree than the denominator. What do we do?

Here comes one of the basic ideas of Partial Fraction Decomposition. We can’t possibly hope to rewrite [math]\frac{x}{(x-1)^2}[/math] as simply [math]\frac{a}{(x-1)^2}[/math] for some number [math]a[/math], because [math]x[/math] isn’t a constant. But we might still hope to manage something like this:

[math]\displaystyle \frac{x}{(x-1)^2} = \frac{a}{x-1}+\frac{b}{(x-1)^2}[/math]

Why is that reasonable? Well, adding together the fractions on the right, we find

[math]\displaystyle \frac{a}{x-1}+\frac{b}{(x-1)^2} = \frac{a(x-1)+b}{(x-1)^2}[/math]

The denominator is simply [math](x-1)^2[/math], which is what we need, and the numerator looks darn close to a general polynomial of the first degree. Great: we can surely adjust [math]a[/math] and [math]b[/math] to make [math]a(x-1)+b[/math] be whatever we want, such as our original numerator, which happened to be [math]x[/math]. Clearly we want [math]a=1[/math], and then we get [math](x-1)+b[/math], so we want [math]b=1[/math] as well.

[math]\displaystyle \frac{x}{(x-1)^2} = \frac{1}{x-1}+\frac{1}{(x-1)^2}[/math]

The cool thing is, we could have done the exact same thing with [math]\frac{mx+n}{(x-1)^2}[/math] for any numbers [math]m,n[/math]. We just need to match [math]a(x-1)+b[/math] with [math]mx+n[/math], which is no trouble at all.

And the coolest thing is, this lets us handle numerators up to degree [math]1[/math], and numerators of degree [math]2[/math] and beyond can be handled as before, by first dividing by the denominator, being left with a remainder of lower degree – meaning, the first degree! We are able to exactly match the freedom we need.

You may guess how this is going to go from here.
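The coefficient matching just described can be captured in a tiny sketch. The helper name `decompose_over_double_root` is ours, and the formulas a = m, b = n + m·u come from matching a(x-u)+b with mx+n:

```python
from fractions import Fraction

def decompose_over_double_root(m, n, u=1):
    """Write (m*x + n)/(x-u)^2 as a/(x-u) + b/(x-u)^2.
    Matching a*(x-u) + b = m*x + n gives a = m, b = n + m*u."""
    a = Fraction(m)
    b = Fraction(n) + Fraction(m) * u
    return a, b

# The text's example x/(x-1)^2 = 1/(x-1) + 1/(x-1)^2:
a, b = decompose_over_double_root(1, 0)
assert (a, b) == (1, 1)

# Sanity check the identity at a few sample points (exact arithmetic):
for x in (Fraction(3), Fraction(5, 2)):
    assert a/(x - 1) + b/(x - 1)**2 == x/(x - 1)**2
```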
For things like [math]\frac{f(x)}{(x-1)^3}[/math], we will first reduce [math]f(x)[/math] to a remainder of the second degree, and then handle it by finding appropriate constants [math]a,b,c[/math] to make [math]\frac{a}{x-1}+\frac{b}{(x-1)^2}+\frac{c}{(x-1)^3}[/math] do what we need it to do.

Ok. So far the denominators were all [math](x-1)^k[/math] for some power [math]k[/math]. The same thing obviously works for [math](x-u)^k[/math] for any [math]u[/math], but what about more complicated denominators? For example, what are we supposed to do with

[math]\displaystyle \frac{1}{x^2-1}[/math]?

Well, [math]x^2-1[/math] factors as [math](x-1)(x+1)[/math]. Does this help? Can we hope to achieve

[math]\displaystyle \frac{1}{x^2-1} = \frac{a}{x-1}+\frac{b}{x+1}[/math]?

Why, sure. Again, let’s just add up those guys on the right, getting [math]\frac{a(x+1)+b(x-1)}{(x-1)(x+1)}[/math]. The numerator can be adjusted to be any linear polynomial, and that’s all we need since the denominator has degree [math]2[/math].

What’s the general lesson? The general lesson is that the denominator [math]g(x)[/math] should be broken down into a product of [math](x-u)[/math]'s, and then we rewrite the fraction as a sum of simple things over these [math](x-u)[/math]'s individually. We’ve also learned that if any [math](x-u)[/math] appears with multiplicity, as in [math](x-u)^k[/math], we should allow for denominators of [math](x-u)^j[/math] for [math]j=1,2,\ldots,k[/math], to get enough freedom to match our numerators.

Partial Fractions over the Complex Numbers

But… can we factor any polynomial [math]g(x)[/math] into linear factors, as in [math]g(x)=(x-u)^k(x-v)^l\ldots(x-w)^m[/math]? Can we?

Well, sure we can. That’s just what complex numbers were made for.
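For the distinct-root case just shown, matching a(x+1)+b(x-1) against a general numerator mx+n gives the 2-by-2 system a+b = m, a-b = n, which solves in one line. A small sketch (the helper name `split_distinct_roots` is ours):

```python
from fractions import Fraction

def split_distinct_roots(m, n):
    """Decompose (m*x + n)/((x-1)(x+1)) as a/(x-1) + b/(x+1).
    Matching a*(x+1) + b*(x-1) = m*x + n gives a+b = m and a-b = n."""
    a = Fraction(m + n, 2)
    b = Fraction(m - n, 2)
    return a, b

# The text's example 1/(x^2 - 1), i.e. m = 0, n = 1:
a, b = split_distinct_roots(0, 1)
assert (a, b) == (Fraction(1, 2), Fraction(-1, 2))
```

This matches the decomposition 1/(x²-1) = (1/2)/(x-1) - (1/2)/(x+1) used later in the answer.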
Over the complex numbers, any polynomial has just as many roots as its degree, or in other words, every polynomial factors completely into linear factors.

Therefore, over the complex numbers, the random meanderings we just went through are completely sufficient to achieve a partial fraction decomposition of any rational function. It can have rational coefficients, it can have real coefficients, it can even have complex coefficients; it doesn’t matter – it’s totally doable.

Start with

[math]\displaystyle R(x) = \frac{f(x)}{g(x)}[/math]

If the degree of [math]f(x)[/math] is at least that of [math]g(x)[/math], use long division to write [math]f(x) = q(x)g(x)+r(x)[/math] where [math]r[/math] has lower degree. Then

[math]\displaystyle R(x) = q(x) + \frac{r(x)}{g(x)}[/math]

and now the numerator [math]r(x)[/math] is smaller, in terms of degree, than the denominator.

Next, factor [math]g(x) = (x-u_1)(x-u_2)\cdots(x-u_n)[/math]. The roots [math]u_1,u_2,\ldots,u_n[/math] may not be distinct, so a better way to write this is

[math]\displaystyle g(x) = (x-u_1)^{m_1}\cdots(x-u_k)^{m_k}[/math]

where [math]u_1,\ldots,u_k[/math] are the distinct roots and [math]m_1,\ldots,m_k[/math] are their multiplicities. We expect to need simple fractions that look like [math]\frac{a_{ij}}{(x-u_i)^j}[/math], where the [math]a_{ij}[/math] are just some numbers, nothing fancy, and the denominators are all powers of [math](x-u_i)[/math], never exceeding the appropriate multiplicity, meaning [math]j =1,2,\ldots,m_i[/math].

So the overall decomposition we expect looks like this:

[math]\displaystyle \frac{r(x)}{g(x)} = \sum_{i=1}^k \sum_{j=1}^{m_i} \frac{a_{ij}}{(x-u_i)^j}[/math]

This looks a bit scary, but seriously, it’s nothing. You merely write out the right hand side, add things up, and solve for the [math]a_{ij}[/math] to make them fit your actual numerator [math]r(x)[/math].
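When all the poles are simple (every multiplicity is 1), there is a well-known shortcut worth mentioning, the "cover-up" or residue formula: the coefficient over (x - u_i) is r(u_i)/g'(u_i). A quick numeric sketch, using 1/(x²+1) with roots ±i as the example (the helper name `coefficient` is ours):

```python
# Cover-up / residue formula for simple poles: if g has distinct roots u_i,
# then r(x)/g(x) = sum_i a_i / (x - u_i) with a_i = r(u_i) / g'(u_i).
def coefficient(r, g_prime, root):
    """Coefficient of 1/(x - root) in the decomposition of r/g."""
    return r(root) / g_prime(root)

# Example: 1/(x^2 + 1) has roots x = i and x = -i, and g'(x) = 2x.
a_plus = coefficient(lambda x: 1, lambda x: 2 * x, 1j)    # at x = i
a_minus = coefficient(lambda x: 1, lambda x: 2 * x, -1j)  # at x = -i

assert a_plus == -0.5j   # -i/2
assert a_minus == 0.5j   #  i/2
```

These are exactly the coefficients -i/2 and i/2 that appear in the decomposition of 1/(x²+1) further down.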
It’s nothing worse than solving linear equations, and you are guaranteed to succeed since there are just enough parameters [math]a_{ij}[/math] to fit the numerator [math]r(x)[/math].

A formal proof of this can be written down in several ways. Since this answer is long enough as it is, let me point you to a wonderful paper by Bradley and Cook (from, incredibly, 2012!), which does everything with complete rigor.

This Just Isn’t Hard Enough: Real Numbers

The way partial fractions are taught in most calculus or real analysis classes involves one more layer of complication, because teachers often insist on doing everything with real numbers. Honestly, in my mind, this adds unnecessary stress on top of something which is already notationally cumbersome, even if it’s not really complicated. Complex numbers are just awesome in how they let you factor any polynomial down to linear factors – why not use them?

Here’s what actually happens in the classroom. Using the general method we just outlined, we find

[math]\displaystyle \frac{1}{x^2-1} = \frac{1/2}{x-1}-\frac{1/2}{x+1}[/math]

so we can integrate [math]\frac{1}{x^2-1}[/math] easily using logs.

[math]\displaystyle \int \frac{1}{x^2-1}\,\mathrm{d}x = \frac{1}{2}\log(x-1)-\frac{1}{2}\log(x+1)[/math].

Similarly,

[math]\displaystyle \frac{1}{x^2+1} = \frac{-i/2}{x-i}+\frac{i/2}{x+i}[/math]

so

[math]\displaystyle \int \frac{1}{x^2+1}\,\mathrm{d}x = \frac{i}{2}\log(x+i)-\frac{i}{2}\log(x-i)[/math]

but that’s not the answer most teachers want, unfortunately. They want [math]\text{arctan}(x)[/math] or [math]\text{atan}(x)[/math] or [math]\tan^{-1}(x)[/math] or whatever they call the inverse tangent function.

Now of course, it’s the same thing. The expression we just wrote down with logs and [math]i[/math] is exactly the inverse tangent. An antiderivative is an antiderivative; there can’t be two of them, except for additive constants.
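One can check numerically that the complex-log expression really is the inverse tangent up to an additive constant. A sketch (the function name `F` is ours; with the principal branch of the complex logarithm the constant works out to -π/2):

```python
import cmath
import math

def F(x):
    """(i/2)*log(x+i) - (i/2)*log(x-i), the complex-log antiderivative
    of 1/(1+x^2); the imaginary parts cancel, so we keep the real part."""
    return (0.5j * cmath.log(x + 1j) - 0.5j * cmath.log(x - 1j)).real

# F differs from arctan only by an additive constant (here -pi/2):
for x in (-3.0, -0.5, 0.0, 1.0, 10.0):
    assert abs((F(x) - math.atan(x)) + math.pi / 2) < 1e-12
```

Since both points on the modulus circle |x+i| = |x-i| contribute only arguments, F(x) is real, and a short computation with `atan2` shows F(x) = arctan(x) - π/2 exactly for every real x.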
But we’re supposed to fear the complex number, so instead of doing it the right and easy way, we complicate things more.

Remember: partial fraction decomposition is really quite straightforward with complex numbers. You factor the denominator completely, you write down the required denominators to cover the multiplicities, done. Integrating is never more complicated than knowing how to integrate [math](x-u)^k[/math], which is really easy.

But if you insist on doing things with real numbers only, you still can… at a cost. A polynomial with real coefficients cannot always be factored into linear factors with real coefficients, but it can be factored into linear and quadratic factors with real coefficients. (Why? Because the complex non-real roots of a real polynomial must come in conjugate pairs, and multiplying these pairs out gives you real quadratics.)

That’s not the end of the world. It just means that now, even in the absence of those annoying multiplicities, we will need to allow terms such as

[math]\displaystyle \frac{ax+b}{x^2+ux+v}[/math]

in our partial fraction decomposition. If we also have multiplicities, we will likewise need to allow

[math]\displaystyle \frac{ax+b}{x^2+ux+v}+\frac{cx+d}{(x^2+ux+v)^2}[/math]

or additional terms like this if the multiplicity is higher. The process of figuring out the coefficients now becomes correspondingly more cumbersome, but this is really just a nuisance.

In terms of integration, it also means that we need to know how to integrate those simple fractions: linear numerators and quadratic denominators. This isn’t hard either, using a few standard trigonometric integrals.

The general philosophy of partial fraction decomposition for rational functions over the real numbers is still the same: we write a general rational function as a sum of a polynomial and simple fractions, where “simple” now means having powers of linear or quadratic terms in the denominators, and nothing worse than linear polynomials in the numerators.
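Integrating one of those linear-over-quadratic pieces comes down to a log term plus an arctangent term. The closed form below is the standard one (complete the square, split off the derivative of the denominator), but the helper name is ours, and it assumes the quadratic is irreducible, i.e. u² < 4v:

```python
import math

def antiderivative(a, b, u, v, x):
    """An antiderivative of (a*t + b)/(t^2 + u*t + v), assuming u^2 < 4v:
    (a/2)*log(x^2+u*x+v) + (b - a*u/2)*(2/s)*arctan((2x+u)/s), s = sqrt(4v-u^2)."""
    s = math.sqrt(4 * v - u * u)
    return (a / 2) * math.log(x * x + u * x + v) \
        + (b - a * u / 2) * (2 / s) * math.atan((2 * x + u) / s)

# Verify F'(x) == (a*x + b)/(x^2 + u*x + v) with a central difference:
a, b, u, v, x, h = 3.0, 1.0, 2.0, 5.0, 0.7, 1e-6
numeric = (antiderivative(a, b, u, v, x + h)
           - antiderivative(a, b, u, v, x - h)) / (2 * h)
exact = (a * x + b) / (x * x + u * x + v)
assert abs(numeric - exact) < 1e-8
```

Differentiating the two terms reproduces (ax + au/2)/Q and (b - au/2)/Q respectively, where Q is the quadratic, so the sum is (ax+b)/Q as required.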
Since we know how to integrate those, we can integrate any rational function.

The General Case

It is tempting to try and generalize everything we just did. We’ve handled rational functions over the real and complex numbers; what about other fields? And what if, instead of polynomials, we look at fractions made of elements of other rings?

One of the simplest steps in our process was the reduction of the numerator into something of lower degree than the denominator. This concept of “degree” generalizes nicely into something called Euclidean domains, where you have a “degree” function and a guaranteed division with a remainder which has smaller degree than what you’re dividing by.

In the general case, we cannot hope to control the degree of the irreducible factors of the denominators. Above, we made good use of the fact that complex numbers guarantee linear factors only, and real numbers guarantee linear or quadratic ones. In general, we don’t have a limit on the degrees of the irreducible factors, but that’s ok: we can still achieve a partial fraction decomposition where the denominators are powers of irreducible factors.

This helps clarify things, I think. The denominators in the approach as it’s usually taught are a hodgepodge of linear and quadratic factors with various powers. The underlying mechanism is irreducibility: you factor the denominator as much as you can, and allow for powers of these factors in the decomposition.

One interesting quirk is that, in those more general cases, the decomposition ends up being non-unique. There doesn’t seem to be a simple recipe to define precisely the conditions on the terms that would guarantee uniqueness. The decomposition always exists, but it’s not necessarily unique in any obvious sense.

For the purposes of finding antiderivatives, this really isn’t a concern at all.
The decomposition we’ve described over [math]\mathbb{C}[/math] and [math]\mathbb{R}[/math] is, in fact, unique, but it doesn’t really need to be: we know from other considerations that an antiderivative must be unique up to an additive constant (since [math]f'=g'[/math] implies [math](f-g)'=0[/math], so [math]f-g[/math] is a constant). But it’s an interesting observation nonetheless. Uniqueness is often harder than mere existence, and it’s equally often very much necessary.

To go back to the context we started with, way back at the top of this answer: what can we say about the partial fraction decomposition of ordinary rational numbers? Well, it does always exist, and it is actually unique if we insist on always taking positive remainders. We get things like

[math]\displaystyle \frac{77}{12}=5+\frac{1}{2}+\frac{1}{2^2}+\frac{2}{3}[/math]

Note that this isn’t the “Egyptian Fraction” decomposition from ancient Egypt, but we do see how the numerators are smaller than the denominators. Also observe how we subtly needed to reduce the integer part from [math]6[/math] to [math]5[/math] in order for the decomposition to work, since the positive fractional parts add up to more than [math]1[/math]. It’s a fascinating way of representing rational numbers, but unfortunately it’s not particularly useful (as far as we know), so we never teach it, which leaves the calculus technique sort of hanging there without context or precedent. That’s really a shame: it’s a simple, and very useful, technique.
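The decomposition of a rational number can be computed with modular inverses: for each full prime power q dividing the denominator d, the coefficient over q is n·(d/q)⁻¹ mod q, and what remains after subtracting those terms is an integer. A sketch with a hypothetical helper (trial-division factoring keeps it self-contained; requires Python 3.8+ for the three-argument `pow`):

```python
from fractions import Fraction

def rational_partial_fractions(n, d):
    """Write n/d as integer + sum of c/q over the prime powers q dividing d,
    with 0 <= c < q. Returns (integer_part, {q: c})."""
    powers, m, p = {}, d, 2      # factor d into full prime powers by trial division
    while p * p <= m:
        while m % p == 0:
            powers[p] = powers.get(p, 1) * p
            m //= p
        p += 1
    if m > 1:
        powers[m] = m
    terms = {}
    for p, q in powers.items():
        terms[q] = (n * pow(d // q, -1, q)) % q   # residue of n/d modulo q
    integer = Fraction(n, d) - sum(Fraction(c, q) for q, c in terms.items())
    assert integer.denominator == 1               # the leftover is a whole number
    return int(integer), terms

# The text's example: 77/12 = 5 + 3/4 + 2/3, and 3/4 = 1/2 + 1/4 in base 2.
whole, terms = rational_partial_fractions(77, 12)
assert whole == 5 and terms == {4: 3, 3: 2}
```

Expanding each c/p^k in base p (here 3/4 = 1/2 + 1/2²) then yields exactly the form shown in the text.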
Is there anything smaller than a Planck length?
Contrary to the (very) popular belief, the Planck length has not been proven to be the smallest possible unit of space.

The Planck length is part of a series of units called the Planck units, which were, unsurprisingly, developed by the famous physicist Max Planck[1].

To develop these units, you begin with 5 fundamental constants:

The speed of light, [math]c = 299792458[/math] m s[math]^{-1}[/math] [2]

The gravitational constant, [math]G = 6.674 08 \times 10^{-11}[/math] m[math]^3[/math] kg[math]^{-1}[/math] s[math]^{-2}[/math] [3]

The reduced Planck constant, [math]\hbar = 1.054 571 800 \times 10^{-34}[/math] kg m[math]^2[/math] s[math]^{-1}[/math] [4]

The electric constant, [math]\frac{1}{4 \pi \epsilon_0} = 8.9875517873681764 \times 10^9[/math] kg m[math]^3[/math] s[math]^{-4}[/math] A[math]^{-2}[/math] [5]

The Boltzmann constant, [math]k_B = 1.38064852 \times 10^{-23}[/math] kg m[math]^2[/math] s[math]^{-2}[/math] K[math]^{-1}[/math] [6]

To produce a Planck unit, you then simply need to work out what combination of these 5 constants you need.

Let’s say we want to define the Planck time, [math]t_p[/math].

Obviously, [math]t_p[/math] has units of time — so [math][t_p] = T[/math], by dimensional analysis[7].

We now construct the following:

[math]t_p = c^\alpha G^\beta \hbar^\gamma \left(\frac{1}{4 \pi \epsilon_0} \right)^\delta k_B^\eta[/math]

where the Greek letters are unknown constants.

Looking at the units in the list above (writing [math]Q[/math] for charge and [math]\Theta[/math] for temperature), we can then write:

[math][t_p] = [c]^\alpha [G]^\beta [\hbar]^\gamma \left[\frac{1}{4 \pi \epsilon_0} \right]^\delta [k_B]^\eta[/math]

[math]T = \left(LT^{-1} \right)^\alpha \times \left(L^3 M^{-1} T^{-2} \right)^\beta \times \left(M L^2 T^{-1} \right)^\gamma \times \left( M L^3 T^{-2} Q^{-2} \right)^\delta \times \left( M L^2 T^{-2} \Theta^{-1} \right)^\eta[/math]

By matching all of our terms, we then get:

[math]T^1 = L^{\left(\alpha + 3\beta + 2\gamma + 3\delta + 2\eta\right)} T^{\left(-\alpha -2\beta - \gamma -2\delta -2\eta\right)} M^{\left(-\beta +\gamma + \delta + \eta\right)} Q^{-2\delta} \Theta^{-\eta}[/math]

By inspection, we can immediately see that since [math]Q[/math] and [math]\Theta[/math] do not appear on the left hand side, [math]\delta = \eta = 0[/math].

We are left with three equations:

[math]T: \quad 1 = - \alpha - 2\beta - \gamma[/math]

[math]L: \quad 0 = \alpha + 3 \beta + 2 \gamma[/math]

[math]M: \quad 0 = - \beta + \gamma[/math]

From (M), we see that [math]\beta = \gamma[/math].

From (L), we see that [math]\alpha = - 5 \beta = - 5 \gamma[/math].

From (T), we see that [math]1 = \left(5 - 2 - 1\right) \beta[/math].

Putting this all together:

[math]\alpha = -\frac{5}{2} \quad \quad \beta = \frac{1}{2} \quad \quad \gamma = \frac{1}{2}[/math]

Therefore:

[math]\large \boxed{t_p = \sqrt{\frac{\hbar G}{c^5}}}[/math]

It is then a simple matter to see that if we want the Planck length, since [math]v = \frac{d}{t}[/math], we expect [math]l_p = v_p \times t_p[/math] — but the Planck speed is the speed of light!

Therefore:

[math]l_p = \sqrt{\frac{\hbar G}{c^3}}[/math]

That is how you derive the Planck length. Nothing to do with anything fundamental about the nature of space! We are literally just multiplying units together, and seeing what combination gives us the units we need.

You can also generate the Planck mass ([math]m_p = \sqrt{\frac{\hbar c}{G}}[/math]), Planck charge ([math]q_p = \frac{e}{\sqrt{\alpha}}[/math]) and the Planck temperature ([math]\Theta_p = \sqrt{\frac{\hbar c^5}{G k_B^2}}[/math]) — from which you can derive everything from acceleration, to power, to voltage!

The Planck units were established because
they simplify many of the more fundamental equations — if you write down your equations in Planck units, you can do away with many physical constants and not have to worry about dimensions.

Newton’s law of gravitation becomes:

[math]F = \frac{G m_1 m_2}{r^2} \mapsto F = \frac{m_1 m_2}{r^2}[/math]

Mass–energy equivalence becomes:

[math]E = mc^2 \mapsto E = m[/math]

And so on and so forth. This is a process called nondimensionalization[8] — and it is often used in theoretical physics, because these multiplicative constants are just artefacts of our measuring systems — they don’t actually contain any information.

So — why does this myth persist?

It is true that several people have estimated that the scale of the Planck length is roughly the order of magnitude around which the structure of spacetime becomes dominated by quantum effects — or roughly the scale of the “quantum foam”.

Please note the phrase “roughly the scale of”.

Human body height is roughly on the scale of [math]1[/math] m. But if someone told you that all humans were 1 m tall, you would look at them like they were madmen!

There’s also a certain amount of arbitrariness in the choice of the base units we used to construct the Planck units — note that we use the reduced Planck constant, [math]\hbar = \frac{h}{2 \pi}[/math]. There’s no reason not to use [math]h[/math] — the difference is a (non-dimensional) factor of [math]2 \pi[/math].
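Incidentally, the boxed formula for the Planck time is easy to check numerically. A quick sketch, with the constant values rounded from the list earlier in this answer:

```python
import math

# Approximate SI values from the list above
c = 299_792_458.0       # speed of light, m s^-1
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054_571_8e-34  # reduced Planck constant, kg m^2 s^-1

t_p = math.sqrt(hbar * G / c**5)  # Planck time
l_p = c * t_p                     # Planck length, since l_p = c * t_p

# Both come out at the commonly quoted magnitudes:
assert abs(t_p - 5.39e-44) < 0.01e-44   # ~5.39e-44 s
assert abs(l_p - 1.616e-35) < 0.01e-35  # ~1.616e-35 m
```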
The same can be said of the factor of [math]4 \pi[/math] in the electric constant.

It’s difficult to ascribe such fundamental importance to a series of units where you can multiply by [math]2\pi[/math] and leave the result unchanged…

So — what is true?

Some people[9] have estimated that on the scale of [math]10^{-35}[/math] metres, any further attempt to probe to a smaller length scale will have no effect (adding more energy will instead create micro black holes, is one guess).

Somebody then went “huh — that’s funny — remember those units that guy came up with 100 years ago? The length in that unit is approximately [math]10^{-35}[/math] metres.”

That somehow got morphed into “OMG, the Planck unit is the smallest possible unit of space!”

I’ve also seen this applied to the Planck time — people then claim that the Planck time is the smallest possible unit of time… which is nonsense. There’s no reason to assert that.

Also, I’ve seen the same assertion made about the Planck charge — that it is the smallest possible unit of charge. Except, [math]q_p = \frac{e}{\sqrt{\alpha}}[/math], where [math]\alpha \approx \frac{1}{137}[/math]… so the electron charge ([math]-e[/math]) is several times smaller than the Planck charge!
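The ratio is a one-liner to check: since q_p = e/√α, the Planck charge is about 11.7 elementary charges.

```python
import math

alpha = 1 / 137.035999  # fine-structure constant (approximate value)

# q_p / e = 1 / sqrt(alpha): how many electron charges fit in one Planck charge
ratio = 1 / math.sqrt(alpha)
assert 11.6 < ratio < 11.8
```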
It’s trivial to show that this statement is utter nonsense.

Bonus points go to the person (who shall remain nameless) who asserted that since [math]l_p[/math] and [math]t_p[/math] are the smallest time and length scales, and since the speed of light is [math]c = \frac{l_p}{t_p}[/math], that’s why the speed of light is an invariant…

I mean… wow.

It’s true that these time and length scales are so tiny as not to be physically meaningful to us as humans — but at the moment, there is no evidence to attribute to them any special significance other than that which is seemingly coincidental.

So, to the question — is anything smaller than a Planck length?

Well — I can give you a physical result which has a value smaller than the Planck length.

In 1973, Jacob Bekenstein published a paper where he showed that the surface area of a black hole increases by [math]1\,A_p[/math] for every bit of information which crosses the event horizon[10]. [math]1\,A_p[/math] is the Planck Area — equal to [math]l_p^2[/math].

The surface area of a sphere is given by [math]4 \pi R^2[/math].

Therefore, the new surface area is given by [math]4 \pi R_{new}^2 = 4 \pi R_{old}^2 + l_p^2[/math]

Then:

[math]R_{new} = \sqrt{R_{old}^2 + \frac{l_p^2}{4 \pi}}[/math]

Therefore the change in radius is given by [math]\delta R = R_{new} - R_{old}[/math]

[math]\delta R = R_{old} \sqrt{1 + \frac{l_p^2}{4 \pi R_{old}^2}} - R_{old}[/math]

We then expand the square root binomially (since [math]R_{old} \gg l_p[/math] for any black hole), such that:

[math]\delta R \approx R_{old} \left( 1 + \frac{l_p^2}{8 \pi R_{old}^2}\right) - R_{old}[/math]

Therefore:

[math]\delta R \approx \frac{l_p^2}{8 \pi R_{old}} \ll l_p[/math]

Therefore, via Bekenstein’s result, the change in radius of a black hole when 1 bit of information is added is much
smaller than 1 Planck length (given that [math]R[/math] for a typical black hole will be on the order of thousands of km — approximately [math]10^{40}\, l_p[/math]).

The Academic Space — A blog for academic-level & evidence-based answers.

Footnotes

[2] https://physics.nist.gov/cgi-bin/cuu/Value?c|search_for=speed+of+light
[3] Newtonian constant of gravitation
[4] Planck constant over 2 pi
[5] electric constant
[6] Boltzmann constant
[7] Jack Fraser's answer to Why is the energy of a particle [math]\frac{1}{2}mv^2[/math]?
[8] Nondimensionalization - Wikipedia
[9] http://www.nature.com/scientificamerican/journal/v292/n5/full/scientificamerican0505-48.html?foxtrotcallback=true
[10] Black Holes and Entropy
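For scale, the Bekenstein result derived above can be evaluated numerically; here is a quick Python sketch (the Planck-length value and the solar-mass Schwarzschild radius of about 2.95 km are standard figures supplied for illustration):

```python
import math

# Illustrative values (assumptions, not from the derivation itself):
l_p = 1.616255e-35   # Planck length, metres
R = 2.95e3           # Schwarzschild radius of a solar-mass black hole, metres

# Change in radius when one bit of information is added (result above):
delta_R = l_p**2 / (8 * math.pi * R)

print(delta_R)        # ~3.5e-75 m
print(delta_R / l_p)  # ~2e-40, i.e. vastly smaller than one Planck length
```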
What is the integration of [math]\dfrac{1}{x}[/math]?
On the one hand, when we integrate [math]x, x^2, x^3, x^4[/math] and so on we get a nice, predictable pattern; on the other hand, when we integrate [math]x^{-2}, x^{-3}, x^{-4}[/math] and so on we also get a nice, predictable pattern. However, when we try to integrate [math]x^{-1}[/math] we get … a hiccup in the pattern. What is going on here?

It is understandable that even when people are told what the answer is they nonetheless walk away confused. My answer to this type of confusion - put on the boots and walk to the top yourself, every inch of the way.

Between the above two families of integrals there lies a discovery - of a new function - by the Scot John Napier (1550–1617) and the Swiss Jost Burgi (1552–1632). To be historically correct, Napier and Burgi did not give their definitions of a logarithmic function in terms of a square area under a hyperbola; this is our modern treatment of the subject which, in a college setting (and depending on the preferences of the course’s author), may take about two full 45-minute lectures to cover. As such, I will just highlight a path from here to there and, to develop your intuition further, you are encouraged to fill in the gaps yourself.

So let us pretend that we have a hunch that the integral in question comes out to be a mysterious function; name it [math]L()[/math] for now.
What interesting properties does [math]L()[/math] have or, better yet, what properties of [math]L()[/math] can be deduced?

Start with delineating a curvilinear trapezoid trapped by the graph of [math]f(x) = \dfrac{1}{x}[/math], the [math]x[/math]-axis and the two verticals [math]x = a[/math] and [math]x = b[/math]. Break the interval [math][a,b][/math] into [math]n, n \in \mathbb{N}[/math] line segments of equal length [math]\Delta x_n[/math] such that:

[math]\Delta x_n = \dfrac{b - a}{n} \tag{1}[/math]

Construct the right Riemann rectangles so that the north-eastern vertex of each rectangle sits on the graph of [math]f(x)[/math]. Then the coordinate of the point [math]x_i[/math] within [math][a,b][/math] in our notation is:

[math]x_i = a + \Delta x_n \cdot i = a + \dfrac{b - a}{n}\cdot i \tag{2}[/math]

and the height of the corresponding rectangle is:

[math]f(x_i) = \dfrac{1}{x_i} = \dfrac{1}{a + \dfrac{b - a}{n}\cdot i} \tag{3}[/math]

Consequently, the square area of the primitive [math]i[/math]-th rectangle is:

[math]A_i = \Delta x_n \cdot f(x_i) = \dfrac{b - a}{n} \cdot \dfrac{1}{a + \dfrac{b - a}{n}\cdot i} \tag*{}[/math]

Multiply the [math]n[/math] in the denominator of the first factor through the denominator of the second factor and consolidate the two ratios into one:

[math]A_i = \dfrac{b - a}{a(n-i) + bi} \tag{4}[/math]

It is instructive to stare at (4) for a few seconds and ponder: what can be squeezed out of it? Observe the magic of numbers - if we divide the top and the bottom of (4) by [math]a[/math] then [math]A_i[/math] will depend only on the ratio of the interval’s boundaries:

[math]A_i = \dfrac{\dfrac{b}{a} - 1}{(n-i) + \dfrac{b}{a}i} \tag{5}[/math]

Note that the above feat will not work for any other reciprocal power of [math]x[/math].
Try it. Since both [math]n[/math] and [math]i[/math] were chosen at will, if we repeat the above experiment for a different interval, say [math][c,d][/math], then the only thing that will change in (5) is the lettering:

[math]B_i = \dfrac{\dfrac{d}{c} - 1}{(n-i) + \dfrac{d}{c}i} \tag{6}[/math]

Whatever manipulations are to be carried out in a traditional calculation of the square area of the region in question, they can be done in terms of the ratio [math]r[/math] of the interval’s boundaries:

[math]r = \dfrac{b}{a} \tag*{}[/math]

If we make this ratio our independent variable then it is natural to define our function [math]L(r)[/math] as the square area under the hyperbola in question bounded by the verticals [math]x = 1[/math] and [math]x = r[/math] - that is, when the region lies to the right of [math]x = 1[/math]. And when the region lies to the left of [math]x = 1[/math] let us agree to take the square area with a negative sign:

[math]L(r) > 0, \; r > 1 \tag{7}[/math]

[math]L(r) < 0, \; 0 < r < 1 \tag*{}[/math]

Further, it should make sense to put:

[math]L(1) = 0 \tag*{}[/math]

right? It’s just the square area of a curvilinear trapezoid of width zero.
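Equations (5) and (6) predict that the Riemann-sum area depends only on the ratio of the interval’s boundaries, and that is easy to see numerically. Here is a small Python sketch of the construction above (the function name `area` and the choice of [math]n[/math] are mine, for illustration):

```python
# Right Riemann sum, as in the construction above: n rectangles of
# width (b - a)/n whose north-eastern vertices sit on the graph of 1/x.
def area(a, b, n=100000):
    dx = (b - a) / n
    return sum(dx / (a + dx * i) for i in range(1, n + 1))

# Intervals with the same endpoint ratio b/a give the same area:
print(area(1, 2))    # ≈ 0.6931
print(area(3, 6))    # same value
print(area(10, 20))  # same value
```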
For further reference let us agree that the geometric notation [math]A(1,r)[/math] denotes the square area of the region bounded by the hyperbola, the [math]x[/math]-axis and the verticals [math]x = 1[/math] and [math]x = r[/math]. Next, from our deductions in (5) and (6) we argue:

Property P1: if the boundaries of two intervals are in the same ratio then the corresponding square areas are equal:

[math]\text{if} \; \dfrac{b}{a} = \dfrac{d}{c} \; \text{then} \; A(a,b) = A(c,d) \tag{8}[/math]

We are now in a position to prove the next property (P2) of [math]L()[/math]: for any two real positive numbers [math]r_1[/math] and [math]r_2[/math]:

[math]L(r_1\cdot r_2) = L(r_1) + L(r_2) \tag{9}[/math]

The proof is not particularly brilliant - we just slug it out through all the possible scenarios: 1) [math]r_1, r_2 > 1[/math]; 2) [math]r_1 \cdot r_2 = 1[/math]; 3) [math]r_1, r_2 < 1[/math]; 4) [math]r_1 > 1, r_2 < 1[/math] (or vice versa). Let’s do cases 1) and 2) together.

Case 1). Let [math]r_1 > 1[/math] and [math]r_2 > 1[/math] and take it that [math]r_2 > r_1[/math] without loss of generality.
Our plan of attack is to exploit P1. On the one hand, by our definition, we can partition the square area in question, since [math]f(x)[/math] (the hyperbola) is continuous, flush with [math]r_2[/math]: it is the square area [math]A(1, r_2)[/math] between [math]1[/math] and [math]r_2[/math] plus the square area [math]A(r_2, r_1\cdot r_2)[/math] between [math]r_2[/math] and [math]r_1\cdot r_2[/math]. Symbolically:

[math]A(1, r_1\cdot r_2) = A(1, r_2) + A(r_2, r_1\cdot r_2) \tag{10}[/math]

On the other hand:

[math]r_1 = \dfrac{r_1}{1} = \dfrac{r_1}{1}\cdot 1 = \dfrac{r_1}{1}\cdot \dfrac{r_2}{r_2} = \dfrac{r_1r_2}{r_2} \tag*{}[/math]

or, to put the above into the form required by P1:

[math]\dfrac{r_1}{1} = \dfrac{r_1r_2}{r_2} \tag*{}[/math]

Verbally: according to P1 the square area [math]A(1, r_1)[/math] between [math]1[/math] and [math]r_1[/math] is equal to the square area [math]A(r_2, r_1\cdot r_2)[/math] between [math]r_2[/math] and [math]r_1\cdot r_2[/math]. Symbolically:

[math]A(1, r_1) = A(r_2, r_1\cdot r_2) \tag{11}[/math]

Replacing the last term in (10) with its equivalent from (11), we obtain the desired result:

[math]A(1, r_1\cdot r_2) = A(1, r_1) + A(1, r_2) \tag*{}[/math]

or:

[math]L(r_1\cdot r_2) = L(r_1) + L(r_2) \tag*{}[/math]

Case 2).
Let [math]r_1\cdot r_2 = 1[/math] and without loss of generality assume that [math]r_1 > 1[/math]; then:

[math]r_2 = \dfrac{1}{r_1} < 1 \tag*{}[/math]

By the definition in (7):

[math]L(1) = 0 = L\Big(r_1\cdot \dfrac{1}{r_1}\Big) \tag*{}[/math]

But:

[math]\dfrac{r_1}{1} = \dfrac{1}{\dfrac{1}{r_1}} \tag*{}[/math]

and from P1 we have (don’t forget our agreement to take the square areas to the left of the vertical [math]x = 1[/math] with a minus sign):

[math]A(1, r_1) = -A\Big(\dfrac{1}{r_1}, 1\Big) \tag*{}[/math]

Geometrically: the two square areas “symmetrical” with respect to the vertical [math]x = 1[/math] are equal in magnitude. Symbolically:

[math]L(r_1) = -L\Big(\dfrac{1}{r_1}\Big) \tag*{}[/math]

from where (9) follows:

[math]L(r_1) + L\Big(\dfrac{1}{r_1}\Big) = 0 = L(1) = L\Big(r_1\cdot \dfrac{1}{r_1}\Big) = L(r_1\cdot r_2) \tag*{}[/math]

and so on.

Next, we show (P3) that there exists an interesting number, name it [math]e[/math], such that:

[math]L(e) = 1 \tag*{}[/math]

By our definition [math]L(r)[/math] is increasing.
If we can demonstrate that [math]L(2) < 1[/math] and [math]L(3) > 1[/math] then it will follow that there must be a real number [math]e[/math] such that [math]L(e) = 1[/math].

First, the area of the unit square (highlighted in red) is clearly larger than [math]A(1, 2)[/math], showing that [math]L(2) < 1[/math].

Next, construct a tangent [math]\tau[/math] to the graph of [math]f(x)[/math] at the point [math]T(2, 0.5)[/math] and the corresponding trapezoid (highlighted in violet). From elementary geometry the square area of the above trapezoid is its height times the length of its midline:

[math]2 \times \dfrac{1}{2} = 1 \tag*{}[/math]

But by definition the tangent [math]\tau[/math] has exactly one point of contact with the graph of [math]f(x)[/math], and since [math]f(x)[/math] is convex its graph lies above [math]\tau[/math] everywhere else. Therefore the square area [math]A(1, 3)[/math] of the corresponding curvilinear trapezoid (not shown) must be greater than [math]1[/math]. As such, there must be a point [math]e[/math] between [math]2[/math] and [math]3[/math] where [math]L(e) = 1[/math].

Lastly, we prove the following property (P4) of [math]L(r)[/math]: for any real numbers [math]r > 0[/math] and [math]p[/math]:

[math]L(r^p) = pL(r) \tag*{}[/math]

Again, the proof is not very elegant - we gradually build up the arsenal of numbers for which P4 holds by examining the [math]p[/math]s as natural numbers, negative integers, rationals and irrationals. For example, if [math]p[/math] is a natural number [math]n[/math] then:

[math]L(r^2) = L(r\cdot r) = L(r) + L(r) = 2L(r) \tag*{}[/math]

[math]L(r^3) = L(r\cdot r^2) = L(r) + L(r^2) = L(r) + 2L(r) = 3L(r) \tag*{}[/math]

and so on (use induction on [math]n[/math]).

If [math]p[/math] is a negative integer then put [math]p = -n[/math], where [math]n[/math] is a positive integer.
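Both P2 and the location of [math]e[/math] can be sanity-checked numerically. Below is a Python sketch; the midpoint-sum function `L` and the numeric choices (the rectangle count, the bisection bounds) are my own illustrative stand-ins for the square area [math]A(1,r)[/math]:

```python
# Numerical stand-in for L(r): midpoint Riemann sum for the area
# under 1/t between 1 and r (negative when r < 1, per our convention).
def L(r, n=100000):
    lo, hi = (1.0, r) if r >= 1 else (r, 1.0)
    dx = (hi - lo) / n
    s = sum(dx / (lo + dx * (i + 0.5)) for i in range(n))
    return s if r >= 1 else -s

# Property P2: L(r1 * r2) = L(r1) + L(r2)
print(L(2.0) + L(3.0))  # ≈ 1.7918
print(L(6.0))           # same value
print(L(2.5) + L(0.4))  # ≈ 0  (case 2: r1 * r2 = 1)

# Property P3: L(2) < 1 < L(3), so bisection locates e with L(e) = 1
a, b = 2.0, 3.0
for _ in range(30):
    mid = (a + b) / 2
    a, b = (mid, b) if L(mid) < 1 else (a, mid)
print(a)  # ≈ 2.71828...
```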
Then:

[math]r^p = \dfrac{1}{r^n} \tag*{}[/math]

and (from case 2 of P2):

[math]L(r^p) = L\Big(\dfrac{1}{r^n}\Big) = -L(r^n) \tag*{}[/math]

But for natural numbers we’ve proved that [math]L(r^n) = nL(r)[/math], therefore:

[math]L(r^p) = -L(r^n) = -nL(r) = pL(r) \tag*{}[/math]

and so on. If you go through all the remaining cases yourself then you should be able to gather enough convincing evidence to reveal the true nature of [math]L(r)[/math]:

[math]L(r) = L(e^{\log_e(r)}) = \log_e(r)L(e) = \log_e(r) = \ln(r) \tag*{}[/math]

and, in fact, we can use the integral in question as the very definition of the logarithmic function for [math]x > 0[/math]:

[math]\displaystyle \ln(x) = \int_1^x \dfrac{1}{t}dt \tag*{}[/math]

EDIT: in the comments below Anupam Nayak asked if I can go over the case when [math]p[/math] is irrational in P4. Sure: we need to prove that when [math]p[/math] is an irrational number then [math]L(r^p) = pL(r)[/math]. To avoid circular reasoning we have to generate a proof from first principles, without reliance on the known properties of the function [math]\log(x)[/math].

Here’s one way to do that: the real numbers have the Archimedean property. It says that for any positive real number [math]x[/math], no matter how small, and for any other (distinct) positive real number [math]y[/math], no matter how large, there always exists a finite nonzero natural number (a positive integer) [math]n[/math] such that [math]n\cdot x > y[/math]. The strict inequality is significant. Employing this property it can be shown that an arbitrary irrational number, say [math]p[/math] in our context, is the limit of a sequence of rational numbers.

However, we choose not one but two limiting sequences of rational numbers between which we “squeeze” our irrational number [math]p[/math]: name one sequence [math]l[/math] for “lower” and the other [math]h[/math] for “higher”.
Then the [math]l[/math] sequence is:

[math]l_1, l_2, l_3, \ldots, l_n, \ldots \tag*{}[/math]

and the [math]h[/math] sequence is:

[math]h_1, h_2, h_3, \ldots, h_n, \ldots \tag*{}[/math]

and by our construction:

[math]l_n < p < h_n \tag*{}[/math]

One practical way to construct such sequences is to take more and more digits from the decimal expansion of [math]p[/math] for [math]l[/math]. Say [math]p = \pi = 3.14159\ldots[/math]. Then take in turn: [math]l_1 = 3.1[/math], which is the rational number [math]\dfrac{31}{10}[/math]; then [math]l_2 = 3.14[/math], which is the rational number [math]\dfrac{314}{100}[/math]; then [math]3.141[/math], then [math]3.1415[/math] and so on. This is our “lower” sequence. Once that lower sequence is on the hook, we use it to feed off of to generate the “higher” sequence which, we carefully observe, must converge to [math]p[/math] but “be” a bit larger: for [math]h_n[/math] add [math]10^{-n}[/math] to [math]l_n[/math]:

[math]h_n = l_n + \dfrac{1}{10^n} \tag*{}[/math]

For example, [math]h_1 = 3.1 + 0.1[/math], [math]h_2 = 3.14 + 0.01[/math] and so on. We can now prove in epsilon-delta notation that the difference between [math]l_n[/math] and [math]h_n[/math] converges to zero as [math]n[/math] grows without bound. Once we understand where the sequences [math]l_n[/math] and [math]h_n[/math] came from, the remainder of the proof should be transparent.
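The two squeezing sequences are easy to generate mechanically; here is a quick Python sketch of the construction just described (truncating the decimal expansion for [math]l_n[/math], then adding [math]10^{-n}[/math] for [math]h_n[/math]):

```python
import math

# Squeeze p = pi between rational sequences: l_n truncates the decimal
# expansion to n digits, and h_n = l_n + 10^-n sits just above p.
p = math.pi
for n in range(1, 8):
    l_n = math.floor(p * 10**n) / 10**n
    h_n = l_n + 10**-n
    print(n, l_n, h_n)   # l_n < pi < h_n, and the gap is 10^-n
```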
Since our function [math]L()[/math] is increasing (assume [math]r > 1[/math]; the case [math]0 < r < 1[/math] is symmetric), by construction:

[math]L(r^{l_n}) < L(r^p) < L(r^{h_n}) \tag*{}[/math]

and here is where our gradual (and not so elegant) accumulation of types of [math]p[/math] for which P4 holds earns its coin: since we previously proved P4 for rationals, we have the right to apply P4 to the left and right [math]L()[/math] above:

[math]l_nL(r) < L(r^p) < h_nL(r) \tag*{}[/math]

Divide the above inequalities by [math]L(r)[/math], which is positive:

[math]l_n < \dfrac{L(r^p)}{L(r)} < h_n \tag*{}[/math]

And now I will type in this needlessly long sentence to give you the pleasure of beating me to the punch, because I am absolutely sure that you have already figured out what should be done next. That’s right: take the limit through the inequalities as [math]n \to +\infty[/math]:

[math]\displaystyle \lim_{n \to +\infty}l_n \leqslant \dfrac{L(r^p)}{L(r)} \leqslant \lim_{n \to +\infty}h_n \tag*{}[/math]

But since:

[math]\displaystyle \lim_{n \to +\infty}l_n = p = \lim_{n \to +\infty}h_n \tag*{}[/math]

“the monkey in the middle” is:

[math]\dfrac{L(r^p)}{L(r)} = p \tag*{}[/math]

or:

[math]L(r^p) = pL(r) \tag*{}[/math]

QED
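As a final sanity check, P4 with an irrational exponent can be verified numerically. Here is a short Python sketch; the midpoint-sum `L` is, as before, my own illustrative stand-in for the square area [math]A(1,r)[/math]:

```python
import math

# Midpoint Riemann sum for the area under 1/t from 1 to r (assume r > 1).
def L(r, n=100000):
    dx = (r - 1.0) / n
    return sum(dx / (1.0 + dx * (i + 0.5)) for i in range(n))

r, p = 1.5, math.sqrt(2)   # p is irrational
print(L(r**p))   # ≈ 0.5734
print(p * L(r))  # same value, consistent with L(r^p) = p L(r)
```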