Level 2 Strange Maths: Fill & Download for Free

A Guide to Drawing Up Level 2 Strange Maths Online

If you are looking to modify or create a Level 2 Strange Maths, here are the simple steps to follow:

  • Hit the "Get Form" button on this page.
  • Wait patiently for the upload of your Level 2 Strange Maths.
  • Erase, type, sign, or highlight as you choose.
  • Click "Download" to save the document.

A Revolutionary Tool to Edit and Create Level 2 Strange Maths

Edit or Convert Your Level 2 Strange Maths in Minutes


How to Easily Edit Level 2 Strange Maths Online

CocoDoc has made it easier for people to fill out their important documents online and customize them as they wish. To edit a PDF document on the online platform, follow these simple steps:

  • Open CocoDoc's website in your device's browser.
  • Hit the "Edit PDF Online" button and upload the PDF file from your device, without even logging in to an account.
  • Add text to your PDF using the toolbar.
  • Once done, save the document from the platform.
  • After editing in the browser, export the form however you see fit. CocoDoc promises a friendly environment for working with PDF documents.

How to Edit and Download Level 2 Strange Maths on Windows

Windows users are common throughout the world, and they have met plenty of applications offering services for managing PDF documents. However, these applications have often lacked important features. CocoDoc intends to offer Windows users the ultimate experience of editing their documents through its online interface.

Modifying a PDF document with CocoDoc is simple. Follow these steps:

  • Find and install CocoDoc from the Windows Store.
  • Open the software, select the PDF file from your Windows device, and continue editing the document.
  • Fill out the PDF file with the toolkit presented in CocoDoc.
  • On completion, hit "Download" to save the changes.

A Guide to Editing Level 2 Strange Maths on Mac

CocoDoc has brought an impressive solution for people who own a Mac, letting them edit their documents quickly. Mac users can fill out forms for free with the help of CocoDoc's online platform.

To edit a form with CocoDoc, follow these steps:

  • Install CocoDoc on your Mac first.
  • Once the tool is opened, upload your PDF file from the Mac.
  • Drag and drop the file, or choose it by clicking the "Choose File" button, and start editing.
  • Save the file to your device.

Mac users can export their resulting files in various ways: downloading them across devices, adding them to cloud storage, or sharing them with others via email. They can edit files through various methods without downloading any tool to their device.

A Guide to Editing Level 2 Strange Maths on G Suite

Google Workspace is a powerful platform that connects the members of a workplace in a unique manner. While allowing users to share files across the platform, it ties together all the major tasks that would otherwise be carried out in a physical workplace.

Follow these steps to edit Level 2 Strange Maths on G Suite:

  • Go to the Google Workspace Marketplace and install the CocoDoc add-on.
  • Attach the file and push "Open with" in Google Drive.
  • Edit the document with CocoDoc in the PDF editing window.
  • When the file is fully edited, download it from the platform.

PDF Editor FAQ

How many ways are there to sort [math]m\cdot n[/math] people into [math]m[/math] groups of [math]n[/math] people each?

Thinking generically, we model people with objects or items, and we model groups (of people) with buckets or boxes. With that translation in hand, the vague objective is to describe mathematically the distribution of objects across buckets.

Objects can be either anonymous (indistinguishable) or distinct, and so can the buckets. Since we are not told explicitly which is which, we shall take all the items to be distinct, and we shall consider distinct buckets first and indistinguishable buckets second.

It may seem that we are now ready to tighten or formalize our requirement as follows:

find the number of ways to distribute [math]n[/math] distinct objects across [math]k[/math] distinct buckets

but combinatorics demands great precision in the description of a problem statement, because the unfolding solution path is very sensitive to that statement. We thus postulate the following requirements:

  • the distribution rule must be given ahead of time
  • each item must wind up in some bucket
  • multiple items per bucket are allowed
  • the order of items within a bucket is irrelevant
  • the concept of order of items across the buckets does not apply

Assume that we have an input set of [math]n = 5[/math] (distinct) items:

[math]S = \left\{A, B, C, D, E\right\} \tag*{}[/math]

and [math]k = 2[/math] distinct buckets denoted [math]b_1[/math] and [math]b_2[/math].

The sample distribution rule - place [math]r_1 = 2[/math] items into [math]b_1[/math] and [math]r_2 = 3[/math] items into [math]b_2[/math] - ensures that, as required, [math]2 + 3 = 5[/math], meaning that [math]r_1 + r_2 = n[/math] and [math]k[/math] equals the number of summands [math]r_j[/math].

The first event, placing [math]2[/math] items into [math]b_1[/math], combinatorially amounts to the selection of [math]2[/math] distinct items from the initial pool of [math]5[/math] items, which can be carried out in

[math]\displaystyle \binom{5}{2} = \dfrac{5!}{2!(5-2)!} \tag{1}[/math]

ways:

[math]AB, \; AC, \; AD, \; AE, \; BC, \; BD, \; BE, \; CD, \; CE, \; DE \tag*{}[/math]

Note that, as promised (or as required), we are ignoring the internal order of the selected items inside the bucket [math]b_1[/math]. Once the above [math]2[/math] items have been selected, there will be [math](5-2)[/math] items left in the pool.

The second event, placing [math]3[/math] items into [math]b_2[/math], combinatorially amounts to the selection of [math]3[/math] items from the remaining [math](5-2)[/math], which can be carried out in

[math]\displaystyle \binom{5-2}{3} = \dfrac{(5-2)!}{3!(5-2-3)!} \tag{2}[/math]

ways (one complementary triple for each choice of pair above):

[math]CDE, \; BDE, \; BCE, \; BCD, \; ADE, \; ACE, \; ACD, \; ABE, \; ABD, \; ABC \tag*{}[/math]

Again, as required, we are ignoring the internal order of the selected items inside the bucket [math]b_2[/math].
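For readers who like to verify such enumerations mechanically, here is a minimal Python sketch (my own illustration, not part of the original argument) that reproduces both lists with itertools and confirms the counts:

    from itertools import combinations
    from math import comb

    S = ("A", "B", "C", "D", "E")

    # first event: choose the 2 items that go into bucket b_1
    pairs = list(combinations(S, 2))
    print(len(pairs) == comb(5, 2))  # True: 10 ways, as in (1)

    # second event: the 3 remaining items go into b_2, in C(3, 3) = 1 way per pair
    for pair in pairs:
        rest = tuple(x for x in S if x not in pair)
        print("".join(pair), "->", "".join(rest))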
Observe that combinatorially the first and the second events are independent of each other. There is a connection between them in the sense that the selection of concrete items for the first event removes those items from the pool and thus makes them unavailable for subsequent choices. But we are dealing with items available in a fixed (and thus finite) supply - the removal of items after the first event is not compensated by similar items taken from elsewhere.

Since the Multiplication Counting Principle (MCP) is only concerned with the number - not the content - of outcomes, once the numbers [math]r_1 = 2[/math] and [math]r_2 = 3[/math] are fixed ahead of time, the outcome of the first event does not affect the number of outcomes of the second event. Thus, by MCP the above two events can occur in sequence in a total of

[math]\dfrac{5!}{2!(5-2)!}\times \dfrac{(5-2)!}{3!(5-2-3)!} \tag*{}[/math]

ways. We deliberately did no actual computations here, in order to show that the common term [math](5-2)![/math] cancels out. If we denote the answer as [math]P(5, 2, 3)[/math] then:

[math]P(5, 2, 3) = \dfrac{5!}{2!\cdot 3!\cdot 1} \tag*{}[/math]

because

[math](5 - 2 - 3)! = (5 - (2 + 3))! = (5 - 5)! = 0! = 1 \tag*{}[/math]

The above argument generalizes in a straightforward manner: if [math]n[/math] distinct objects are distributed across [math]k[/math] distinct buckets [math]b_1, \ldots , b_k[/math] in such a way that [math]r_1[/math] items are placed into [math]b_1[/math], [math]r_2[/math] items into [math]b_2[/math], and so on until [math]r_k[/math] items are placed into [math]b_k[/math], and

[math]\displaystyle n = \sum_{j=1}^k r_j \tag{3}[/math]

then the total number of such distributions [math]P\left(n, r_1, \ldots, r_k\right)[/math] is

[math]P\left(n, r_1, \ldots, r_k\right) = \dfrac{n!}{r_1!\cdot r_2!\cdot\ldots\cdot r_k!} \tag{4}[/math]

which is also known as the multinomial coefficient.

The proof essentially amounts to a recitation of our previous reasoning in generic terms: there are

[math]\displaystyle \binom{n}{r_1} = \dfrac{n!}{r_1!\cdot\left(n-r_1\right)!} \tag*{}[/math]

ways to choose [math]r_1[/math] distinct items from [math]n[/math], there are

[math]\displaystyle \binom{n-r_1}{r_2} = \dfrac{\left(n-r_1\right)!}{r_2!\cdot\left(n-r_1-r_2\right)!} \tag*{}[/math]

ways to choose [math]r_2[/math] distinct items from [math]n-r_1[/math], there are

[math]\displaystyle \binom{n-r_1-r_2}{r_3} = \dfrac{\left(n-r_1-r_2\right)!}{r_3!\cdot\left(n-r_1-r_2-r_3\right)!} \tag*{}[/math]

ways to choose [math]r_3[/math] distinct items from [math]n-r_1-r_2[/math], and so on.

The total number of ways to place [math]r_1[/math] distinct items into [math]b_1[/math] AND [math]r_2[/math] distinct items into [math]b_2[/math] AND [math]r_3[/math] distinct items into [math]b_3[/math] and so on, by MCP, is

[math]\dfrac{n!}{r_1!\cdot\left(n-r_1\right)!}\cdot\dfrac{\left(n-r_1\right)!}{r_2!\cdot\left(n-r_1-r_2\right)!}\cdot \ldots \cdot \dfrac{\left(n-r_1-\ldots-r_{k-1}\right)!}{r_k!\cdot\left(n-r_1-\ldots-r_k\right)!} \tag*{}[/math]

In this rendering the compound factorials of the shape [math]\left(n-r_1-r_2\right)![/math] appear in both the denominator of one fraction and the numerator of the next, and thus cancel out diagonally. The compound factorial in the denominator of the last fraction amounts to unity:

[math]\displaystyle \left(n - r_1 - r_2 - \ldots - r_k \right)! = \left(n - \sum_{j=1}^k r_j\right)! = 0! = 1 \tag*{}[/math]

as per (3).

Consequently, if the buckets are indistinguishable then by the Division Counting Principle we reduce the answer in (4) by [math]k![/math]:

[math]\dfrac{P\left(n, r_1, \ldots, r_k\right)}{k!} = \dfrac{n!}{r_1!\cdot r_2!\cdot\ldots\cdot r_k!\cdot k!} \tag{5}[/math]

since [math]k![/math] is the number of linear permutations of [math]k[/math] distinct items.

In your case we have especially convenient numbers: we seek the number of ways to distribute [math]m\cdot n[/math] distinct items across [math]m[/math] distinct buckets where exactly [math]n[/math] items are placed into each bucket. From (4) the answer in that case is

[math]\dfrac{(m\cdot n)!}{\left(n!\right)^m} \tag{6}[/math]

because in this case each [math]r_j = n[/math] and there are exactly [math]m[/math] of them:

[math]\underbrace{n!\cdot n!\cdot n!\cdot\ldots\cdot n!}_{m} = \left(n!\right)^m \tag*{}[/math]

If, however, the groups of people are indistinguishable (which in my humble and subjective opinion is, in a vacuum, more likely to be the case) then the answer is

[math]\dfrac{(m\cdot n)!}{\left(n!\right)^m\cdot m!} \tag{7}[/math]
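As a sanity check on (4)-(7), here is a small Python brute force (my own sketch, feasible only for tiny m and n) that regenerates both counts for m = 2 groups of n = 3 people by enumerating all permutations:

    from itertools import permutations
    from math import factorial

    def group_counts(m, n):
        people = range(m * n)
        labeled, unlabeled = set(), set()
        for perm in permutations(people):
            # cut each permutation into m consecutive blocks of size n
            groups = tuple(tuple(sorted(perm[i*n:(i+1)*n])) for i in range(m))
            labeled.add(groups)               # distinct buckets: group order matters
            unlabeled.add(frozenset(groups))  # indistinguishable buckets
        return len(labeled), len(unlabeled)

    m, n = 2, 3
    lab, unlab = group_counts(m, n)
    print(lab == factorial(m*n) // factorial(n)**m)                     # True: 20, as in (6)
    print(unlab == factorial(m*n) // (factorial(n)**m * factorial(m)))  # True: 10, as in (7)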
Note that from the problem-solving perspective we solved a more general problem first and then descended, in a way, to a simpler or specific case. To do that we took a concrete problem and abstracted it away. Abstraction can be thought of as a mechanism for separating irrelevant minutiae from the essence of the phenomenon at hand.

Abstraction is used heavily not only in (of course) mathematics but also in computer science, computer programming and physics. It turns out that we live in a rather peculiar universe: it often happens that after we come up with an abstraction based on just one specific case, there later pop into existence other, seemingly totally disconnected, domains of knowledge for which our abstraction can be used as a template. For example, here we switched from people and groups to items and buckets.

Splendid. Now what?

To demonstrate the deep connection between mathematics and physics - a point of frequent inquiries here on Quora - consider the three popular models known as Maxwell-Boltzmann, Bose-Einstein and Fermi-Dirac statistics, which study the behavior of systems comprised of a large number of weakly interacting particles in thermal equilibrium.

In all three statistical models the role of items is played by idealized particles, and the role of buckets is played by the amount of energy these particles possess. The energy levels are taken to always be distinct, while the particles may or may not be distinct. All three statistics aspire to answer the following question: if some particle is selected at random, what is the probability that it has a specific (allowed) amount of energy?

In Maxwell-Boltzmann statistics the particles (gas molecules) are classical and thus possess the fundamental property that we call a trajectory: some curve [math]\vec{r}(t)[/math], representable as a real-valued function of a real-valued argument that is at least twice differentiable.
We hope that we can recover such a function by solving the equations of Newton/Lagrange/Hamilton. The particles in MB statistics are taken to be distinct in the sense that we can number them and track them over time. It follows that in MB statistics we are dealing with distributions of distinct items across distinct buckets, where multiple particles are allowed to have the same energy level - to be in the same bucket.

This is neither the time nor the place for all the technical details but, in general, to find the number of all possible distributions in this case we would combine the number of possible states across the buckets, given by our number [math]P\left(n, r_1, \ldots, r_k\right)[/math], with the number of possible states within a single bucket, given by [math]n^r[/math].

In quantum mechanics things get very strange very fast, and the concept of trajectory flies out the window.

In Bose-Einstein statistics multiple particles (photons, atomic nuclei, atoms with an even number of elementary particles - collectively, bosons) are allowed to have the same energy level, i.e. to be in the same bucket. Since these particles are indistinguishable, what matters is not which particles are in any given bucket but how many of them are in it. The number of possible arrangements of this type is given by the number of combinations of a multiset with infinite supply:

[math]\dfrac{(n+r-1)!}{r!(n-1)!} \tag*{}[/math]

In Fermi-Dirac statistics multiple particles (electrons, neutrons, protons - collectively, fermions) are not allowed to occupy the same energy level: any given bucket can hold no more than one particle. This is known as the Pauli exclusion principle. As such, at any given time a bucket is either empty or contains exactly one item. It follows that in this case the number of possible distributions equals the number of ways to distribute [math]r[/math] indistinguishable items across [math]n[/math] distinct buckets, given by the binomial coefficient:

[math]\displaystyle \binom{n}{r} \tag*{}[/math]

In all three statistics the resulting numbers contain a factorial of the total number of particles, and that factorial, as you might imagine, will normally be enormous. And if the factorial is enormous then its Stirling approximation works well - that is how, in broad strokes (overlooking a number of gory details), the relevant formulas are deduced.
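To make the bookkeeping concrete, here is a tiny Python sketch (my own toy illustration: r particles, n distinct energy levels, with the occupancy rules stated above) computing the three microstate counts:

    from math import comb

    n, r = 4, 3  # n distinct energy levels (buckets), r particles (items)

    # Maxwell-Boltzmann: distinguishable particles, unlimited occupancy -
    # each particle independently picks one of the n levels
    mb = n ** r

    # Bose-Einstein: indistinguishable particles, unlimited occupancy -
    # combinations of a multiset: (n + r - 1)! / (r! * (n - 1)!)
    be = comb(n + r - 1, r)

    # Fermi-Dirac: indistinguishable particles, at most one per level (Pauli exclusion)
    fd = comb(n, r)

    print(mb, be, fd)  # 64 20 4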

What would mathematicians do if someone proved that it is not possible to create a consistent system of axioms?

Gödel's incompleteness theorem was proved for axiom systems that include addition, multiplication, numbers, and more than that: those operations have to have particular properties. Mathematicians generally believe such systems to be consistent. At any rate, there are many other complete, decidable and consistent axiom systems. Let's unpack this in more detail.

It was Kurt Gödel who proved the remarkable incompleteness theorems; Turing and Church proved related theorems at around the same time.

There are many such systems of axioms, but one of the easiest to state is the system of Peano axioms. It consists of some niggly axioms you have to add to define the operations of equality and adding 1. Then you also say that there's a first number 0 (this is an axiom system for the non-negative numbers only).

Then it has one very special axiom, the "induction axiom": if a property is true of 0, and if you can prove that whenever it is true of a number n it is also true of n+1, then this property must be true of all numbers.
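Written symbolically (my own rendering, not the answer's notation), the induction schema for a property P reads:

    ( P(0) and, for all n, P(n) implies P(n+1) )  implies  ( for all n, P(n) )

or, in logical symbols: (P(0) ∧ ∀n (P(n) → P(n+1))) → ∀n P(n).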
This is the axiom that causes all the trouble. It lets you define addition, multiplication, and indeed what it means to raise one number to the power of another, e.g. 2^3, 5*7, etc. You can go on to prove many complicated theorems. For instance, you can prove that every number has a unique prime factorization, e.g. 30 = 2*3*5, where 2, 3 and 5 are all prime (not divisible by any smaller number other than 1), and this is the only way you can express 30 as a product of primes. The same is true for all numbers: each can be expressed as a product of primes in only one way.

Those were the main properties of numbers that Gödel used to prove his theorem, together with frequent use of proof by induction. Once you have that much deductive power, then yes, you can't prove that the resulting system is consistent.

So - that's a big chunk of mathematics. We use numbers almost everywhere. But there are many areas of maths that don't need numbers, or don't need numbers with all that apparatus of deduction rules to prove things about them.

One excellent example is geometry using ruler and compasses - Euclidean geometry. Euclid worked out the basic axiomatization, though he left out some things that seemed obvious to him, so obvious he didn't realize they needed an axiom. One of the rules he left out is that if a line enters a triangle, crossing one of its sides, it has to exit the triangle by crossing one of the other sides or the opposite vertex. It was so obvious that, even with his careful logical mind, he didn't realize he had it as an assumption.

Anyway - if you have a proper axiomatization of geometry, well, there's no mention of numbers there. And it turns out not only that Gödel's theorem doesn't apply, but that if you are careful in how you set out your axioms, you can also prove that the resulting theory is consistent, decidable and complete. You can use Tarski's axiomatization of geometry to prove this.

What's more, even addition and multiplication are not enough for Gödel's theorem. They have to have very special properties, with quite a powerful deduction system. One nice system that you can prove to be consistent is the theory of real closed fields.

This is a theory with addition, multiplication, the numbers 1 and 0, and fractions, so you can use it to express any ratio like 2/3, 4/5, etc. It is also "closed", meaning that given any sequence of numbers in it, the limiting point of that sequence is also in it. In particular it includes every infinite decimal, such as pi as the limiting point of 3, 3.1, 3.14, 3.141, 3.1415, …

It also has the <= relation, which works just as it does on the real numbers: given any pair of numbers, either they are identical or one of them is smaller than the other; if a<=b and b<=a then a = b; and if a<=b and b<=c then a<=c. In short, it is a total order.

Well, that may seem very similar to what you get with Peano's axioms - after all, we now have perfectly good numbers we can use for addition and multiplication. But it turns out we just don't have the same deductive power we have with Peano's axioms. We couldn't, for instance, use these rules to define exponentiation, primality, unique factorization and so on, because the theory has no induction rule for the numbers. So we can't prove Gödel's theorem for this theory either.

It turns out that if we add two more axioms - an axiom that the theory includes the square root of any number, and an axiom that it includes at least one solution of any polynomial equation of odd degree (linear, cubic, quintic, etc.) - then we can prove that the resulting theory is complete, consistent and decidable: given any postulate you can state within the theory, it is either true or false, and what's more, there's a procedure you can follow that is guaranteed to find the right answer.

Now, this process for finding the right answer to any question in the language is not very practical (nor is it for Tarski's geometry). The procedure is immensely complex and might take pretty much forever on human timescales. Nevertheless, in the sense of Gödel's theorem, the theory is decidable, consistent and complete.

So - now we've seen some rather powerful theories that are decidable, consistent and complete. Now let's look at the opposite: something that seems like a very weak theory, yet is enough to prove Gödel's theorem, so it can't be shown to be consistent. This is Robinson arithmetic.

The axioms are as follows, where "successor" (written S) means the result of adding 1:

  • 0 is a number and is not the successor of any number.
  • If the successor of x equals the successor of y, then x = y.
  • Every number except 0 has a predecessor.
  • Adding 0 to a number has no effect: x + 0 = x.
  • Multiplying any number by 0 gives 0: x * 0 = 0.

Then we have a couple of rules to define addition and multiplication. For every x and y:

  • x + Sy = S(x + y)
  • x * Sy = (x * y) + x

It hardly seems enough. Surely the resulting theory is logically weaker than the theory of real closed fields? It turns out the answer is no: there is enough deductive power here to deduce Gödel's theorem.
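To see how the last two recursion rules do real work, here is a small worked sketch (my own illustration, abbreviating 1 = S0, 2 = SS0, 4 = SSSS0) of how 2 + 2 = 4 unwinds:

    2 + 2 = 2 + S1 = S(2 + 1) = S(2 + S0) = S(S(2 + 0)) = S(S2) = 4

Each step applies either x + Sy = S(x + y) or the base rule x + 0 = x.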
This theory therefore can't be proven to be consistent except in a stronger or equally strong theory.

Now, this doesn't mean that it is inconsistent. Indeed, it doesn't actually rule out a consistency proof. Gentzen gave a consistency proof for Peano arithmetic using a slightly different and in some ways simpler axiomatization of arithmetic called "primitive recursive arithmetic", plus something called "transfinite ordinals" (see Gentzen's consistency proof - Wikipedia). All of this is very techy, but mathematicians find it reassuring, because Peano arithmetic with its induction axiom seems uncomfortably like the powerful theory of Frege's axiomatization of set theory, whereas this other axiomatization is in a way more straightforward. So it's good evidence, perhaps, that we won't get into any trouble by treating Peano arithmetic as a consistent theory, even though we can't prove this.

Rather, I think a better way of looking at it is this: by Gödel's incompleteness theorem, any powerful enough system of axioms can never capture all the maths that is implied by those axioms. If you've described the system clearly enough that it's totally clear how theorems are proved, then by Gödel's ingenious methods that leaves it open to processes of adding new axioms - axioms which a mathematician can see must be true, but which are not included in your original list.

It also leaves you open to the possibility of adding the negations of those new axioms, of course, if you want to explore systems that are omega-inconsistent: strange theories where you can say "there is a number with property P" and yet "1 doesn't have property P, nor does 2, nor 3, nor 4, nor 5, …", with every single statement of that sort false. It seems inconsistent, but mathematically it isn't possible to give a finite proof of an inconsistency. So it's a bit weaker than normal inconsistency, and the techy word for it is "ω-inconsistent" (ω is a symbol for the sequence of all the numbers 1, 2, 3, …; the theory stays consistent unless you string together infinitely many statements, which we can't do in practice, so you can never prove it inconsistent).

So anyway, you are not prevented from exploring those strange theories either, in full knowledge of what you are doing. Indeed some mathematicians are also interested in "paraconsistent" theories: theories that have proofs of inconsistencies which are finite but rather large, so you can work with them for a long time without hitting an inconsistency. These are not unlike the ordinary logic we use in everyday life. We are actually able to work perfectly well with inconsistent theories. E.g., if you want to take something out of your house, you might think it is small enough to fit through the door upright, and actually try to take it out that way, only to find it won't fit. This means you had an inconsistent theory about that object. No problem: modify your theory to say "oh, I get it now, it has to be turned on its side", and now you can take it out of your door. In more complex situations you may work with beliefs or ideas you know to be inconsistent, and just avoid the situations where you have to face the inconsistencies. It might work just fine. In some areas, e.g. law, you may have to work with case decisions that are inconsistent with each other and try to find a way through the situation. Basically, we'd be hugely handicapped in our everyday lives if we had to work only with consistent sets of ideas. So sometimes it's interesting to work with inconsistent sets of axioms in maths too.

So you can do all that. And you can also work with consistent theories, of course - or at least theories that you have every reason to believe are consistent, even though you can't prove it. Just use the axioms you can see to be true of numbers, based on Peano's axioms; then take Gödel's sentence, see that it's true and therefore unprovable, and add it as an axiom. Just keep going. You can expand your axiom systems as needed, in creative ways, whenever you need to go beyond their limitations - on and on, as much as you like.

So it means maths has to be creative, and never-ending.
And we can never know for sure that it is consistent once it reaches a certain level of complexity and power. But it doesn't have to be inconsistent.

Yes, there is a case of "once bitten, twice shy": when Frege published his life's work, a foundation for all of set theory, Russell found a mistake in it - he proved it was inconsistent, via Russell's paradox. It could be fixed, but only in clumsy ways. This suggests that it's not as easy as one might think to come up with a theory that you know for sure is consistent. Indeed, Gödel's theorem shows we can't prove that even the Peano axioms are consistent.

But I think most mathematicians would say there is no real risk that those axioms are inconsistent in the way Frege's set theory was. We don't need to worry that some ingenious fellow will pop up, as Russell did for Frege, and say "look, here is a proof of an inconsistency from the axioms of Peano arithmetic".

The numbers are simply impossible to encapsulate completely in an axiom system that captures all their properties. That's why they aren't decidable, are incomplete, and can't be proven to be consistent.

Now, I should mention Gödel's completeness theorem, because it confuses many people. It involves a different notion of completeness from the one used in Gödel's incompleteness theorem: it is about the formal consequences of the axioms, not the informal consequences that you can see by reasoning about the theory in a metamathematical way. It says that if you enumerate all the proofs you can make using the axioms of the theory, those are all the results you can deduce from the axioms, and the only results you can deduce; there are no hidden extra axioms you need to add to "complete" the theory - it's all there. Put another way, if you interpret truth as "true in every possible model", then all such truths can be deduced from the axioms of your theory.

I should say, this is material I researched back in the 1980s. I haven't worked on it at all since then, and if you asked me to go into detail about some of these things, I'd have to "look it up and get back to you". But hopefully this gives a reasonable idea of how it works.

It's rather baffling, for sure. But I think the best way to look at it is that Gödel proved that mathematics can't stagnate and that there is a need for endless creativity, since it can't be "cut and dried and diced" into a single overall theory that encompasses everything. Some theories, though, notably geometry, can be made completely decidable, consistent and complete.

Also, all the problems here are to do with infinity. If you are studying something finite - for instance a finite group, with a finite number of elements, so that all questions can be answered by doing finite, if very complicated, calculations - then that's going to be a complete, consistent and decidable theory too. It's only when infinity comes in, in some way, that you may (but not always) hit these issues.

You might also be interested in my answer on Russell's paradox.

Why exactly are singularities avoided or “deleted” in physics?

Consider first that, from a general systems-theoretical viewpoint, both mathematics and physics basically define a set of elements (a taxonomy), determine their relationships with themselves (properties), and determine their relationships with other elements (interactions). The root "finite" in "define" is key here - can you actually define infinity? Is infinity something that you can "definitely" reach? The analysis of relationships raises the same finite/infinite questions.

Concerning the specifics, there is a fundamental difference (ignoring some specific exceptions):

  • Mathematics strives for precision in the definition of elements and exact analysis of their relationships ab initio. Its work begins with a statement of axioms (for this discussion we'll avoid the technical difference between axioms and postulates), from which theorems are derived.
  • Physics seeks to evolve a mathematical model of fundamental elements and their relationships that describes the universe as most objectively sensed (I purposely suggest something more fundamental here than "measured by calibrated instrumentation, in a repeatable manner, by multiple observers" - something borne out by modern physics itself).

These differences affect how infinity is treated - but in both disciplines there is a willingness to sacrifice the idea of an actual reality for a pragmatic model that we can work with within our conceptual and sensory limits. The problem comes when this model is believed to be Reality.

Returning to math: until the mid-19th century, axioms were considered self-evident truths. The discovery of consistent non-Euclidean geometries (especially when, in the 20th century, these were shown to be an even better description of what was objectively observed) led to grave doubts about the notion of "self-evident". Further revelations later in the 19th century deepened the doubt, particularly the 1872 publication of the Weierstrass function, where an infinite process produced an "impossible" function. [Everywhere its value was defined - you could calculate f(x) for every x. Yet nowhere was it differentiable - no tangent to a point existed, because between any two points there were infinitely many points where the function was greater or less than either of them.]

It became clear that what was known as "the Great Crisis of Mathematics" was mainly due to the notion of infinity being treated as a number, and to the use of infinite processes. The mathematical program around actual infinity - as opposed to mere formalisms like division by infinity or the limit process in calculus - became its elimination: separating it from the finite and enclosing it in finite packaging. This is done essentially by three means:

  • "Undefined" - matters such as an actual 0/0 or ∞/∞ are so classified; in particular, an infinite function value within a finite range - an accessible point x where some f(x) becomes infinite - is known as a "singularity".
  • "Transfinite sets" - all equivalent descriptions of infinity, per Cantor's diagonalization method, are gathered into such sets, treated like cardinal numbers. These transfinite "numbers" are their own world of abstraction, have no direct bearing on matters outside pure mathematics, and are completely separated from finite mathematics.
  • "Axiom of Choice" - in a nutshell, this axiom admits an infinite (nonconstructive) process, while its opposite (also used in some systems) denies such a process. When used in a finite context it has brought about paradoxical results; the example par excellence is the Banach-Tarski paradox.
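For reference, the Weierstrass function mentioned above is usually written (a standard form, not quoted from the original answer) as

    W(x) = Σ (n = 0 to ∞) aⁿ · cos(bⁿ · π · x),  with 0 < a < 1,

where Weierstrass took b to be an odd integer with ab > 1 + 3π/2: the series converges for every x, yet the resulting curve has a tangent nowhere.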
It is noteworthy that Cantor's diagonalization really requires the Axiom of Choice, because it involves an infinite process. Further, as Paul Cohen proved in the 20th century, the continuum hypothesis - Cantor's hypothesis that there is no transfinite cardinal (no infinite set) between his zeroth and first - also turns on the Axiom of Choice, and similarly for the "generalized continuum hypothesis" across all transfinite numbers.

Now, while physics models mathematically, it does not always "dot the i's and cross the t's." A classic (and, we'll see, crucial) example of this regarding infinity can be taken from Einstein's discussion of the Euclidean and non-Euclidean continuum in Relativity: The Special and General Theory (1920), XXIV:

The surface of a marble table is spread out in front of me. I can get from one point to a 'neighboring' one by repeating the process a (large) number of times - in other words, by going from point to point without executing 'jumps.' I am sure the reader will appreciate with sufficient clearness what I mean here by 'neighboring' and by 'jumps' (if he is not too pedantic). We express this property of the surface by describing the latter as a continuum.

The key expression here is "not too pedantic". A true (mathematical) continuum is infinitely dense, and going from point to point ultimately involves an infinite process - one that is in fact physically impossible, as the philosopher Zeno of Elea already realized two and a half thousand years ago.

To generalize the issue: the physical world - the world perceived by our senses (and according to the mathematician Kurt Gödel, one of the greats of the 20th century, the only source of axioms that can be taken as "true" in any real sense is determined by sensory perception) - is finite. Simply put, arithmetic logic does not apply to the infinite; ergo infinity has no rules, and a legal universe can't contain it.

The first implication is that Einstein's continuum is, so to speak, digital, not analog. That is, it consists of pixels. According to the Holographic Principle, derived from studying black-hole entropy, these are actually "2D" pixels, of Planck-length-square area, on the boundary of the known universe - the radius from us at which the outward expansion of spacetime reaches the speed of light.

Consider now the photon, which has exactly zero rest mass and travels exactly at the speed of light, so that in its "reference frame" it travels "through the zero depth of a plane in zero time." Now, what does that mean? Well, what does relativity say in the limit of light speed? The photon has zero rest mass - not an approach to it. The Lorentz transformation therefore says that m' = 0/0, t' = 0, and l' = 0. In mathematics this is called undefined. In physics it's called "it could be anything - the model has broken down." So what about t' = 0 and l' = 0? Where is its zero length, in the zero depth, at this particular moment in zero time? You can't ask such a question in physics, so let's get our heads back into the quantized finite and let quantum mechanics answer these questions.
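For reference, the limits invoked above follow from the standard special-relativity formulas (my own rendering, with m₀ the rest mass and l the proper length):

    γ  = 1 / √(1 − v²/c²)        → ∞   as v → c
    l′ = l / γ                   → 0   (length contraction)
    m′ = m₀ / √(1 − v²/c²)       → 0/0 when m₀ = 0 and v = c

and the proper time elapsed along the photon's own path is likewise zero, which is the t′ = 0 above.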
First, the mass: 0/0 - it could be anything. Okay, but what about position: where is the photon, with its zero length, in the zero depth, at this particular moment in zero time? Actually nowhere and everywhere, right? Quantum mechanics (proven definitively by a large body of experiments) basically agrees: photons have no real properties until they are measured.

In fact, if two photons were generated together - even a billion years ago and a billion light-years away - they share information once measured, instantly. This would imply that while they are apart they also never left their point of origin, or that the measurement caused a time reversal, or that the spacetime interval since generation is actually zero - however you'd like to model it. [Another thought to intrigue, if you've watched the presentations on the Holographic Principle, particularly the second with Leonard Susskind: is the photon, with its zero-time embedding in a "plane", not just seeing more clearly the "holographic plate" (to use the direct analogy of the kind of holograms we use in everyday life)?]

One more point: consider the birth and termination of a photon - say, through an atomic, molecular-covalent, or crystal-band electron emitting a photon to make a "quantum jump" to a lower energy state, or absorbing one to enter a higher energy state, respectively.

[For reference, this happens many times when light travels through a transparent material like air, more so in water, and generally still more so in clear solids. Photons are absorbed and emitted about every mean free travel time. Though they travel at the vacuum speed of light during free travel, the delay of absorption and re-emission renders the effective speed of light in non-vacuum environments slower than in vacuum.]

Question: does this absorption or emission happen instantaneously? And if not, could we have analog-disappearing (or appearing) photons, approaching (or receding) with instantaneous acceleration up to (or down from) light speed?

The answer is famously strange in the digitized quantum sense of things, but essentially the absorption is digitally instantaneous yet analog-spread in time. This is due to the energy-time version of the Heisenberg Uncertainty Principle. Actually, "uncertainty" is a misnomer - the proper term would be "indeterminacy". It is not that we don't know the exact instant of the event; it is that the instant isn't any particular one during the possible time period - it is a superposition of all of them. So, if one would, the photon is similarly, digitally as it were, instantaneously accelerating and whole. It is as though there is a digital reality embedded in a deeper analog reality - a character, as it were, in a virtual-reality video game being played by a more real player sitting in front of a console.

How deep does the rabbit hole go? Pursuing this idea will be our final stop, but we've yet to reach the really big infinity controversies of modern physics: the existence of infinite-density point masses - physical singularities.

In mathematics, as discussed above, a singularity is "undefined", as it is - after all - not finite. However, to any level of approximation considered sufficient - e.g., in the physical sense, to within measurement error - up to within any radius of x there are limitless other functions that could replace f such that f(x) is finite.

There are two types to consider: subatomic-particle point masses, and cosmic singularities. The first led to nightmarish infinity paradoxes until they were apparently solved by adding dimensions via string theory. As for cosmic singularities, the question is whether general relativity breaks down into some more general rule, perhaps a theory of quantum gravity, or whether something along the lines of the Penrose Cosmic Censorship Conjecture applies.
The latter requires an "event horizon" - a surface around a singularity beyond which even light cannot escape - thus separating the singularity from the outside universe in terms of causality (which would paradoxically cease to exist beyond the future point where the singularity would be met). To the outside universe, anything falling toward the singularity appears to compress and slow in time so as never to actually reach even the event horizon, much less the singularity. This effectively moves the singularity out to infinite time, eliminating the infinity paradox.

What is particularly interesting is that the Holographic Principle discussed earlier may well have combined cosmic censorship, quantum gravity, and uncertainty/entanglement.

In essence, the proper view in physics is that we don't ultimately know what reality is. In the words of the late Stephen Hawking (from a 1994 debate with Roger Penrose):

…I don't demand that a theory correspond to reality because I don't know what it is. Reality is not a quality you can test with litmus paper. All I'm concerned with is that the theory should predict the results of measurements….

There is much serious consideration being given to the possibility that this world is merely a simulation. One presentation does a very nice job of explaining in detail, as far as possible for a lay audience, the prehistory and the crucial quantum-mechanics experiments up through the present decade that lead to what seems its inevitable conclusion. Perhaps a step further, or at least a variation on the theme, is Simulation Theory 2017, We are waking up!!!!

Here Donald Hoffman, a professor of cognitive science at the University of California at Irvine, presents a theory of all reality as the entangled interaction of conscious agents - combining into singular entities of group consciousness - and hints, I believe, at deeper internalities of these. It is particularly noteworthy that, according to Prof. Hoffman, as devoted a materialist as Richard Dawkins has agreed that even this version of the simulation hypothesis can work with a more abstract extension of the level of evolution.

However, whatever the resolution of our floating digital world, it cannot be embedded like software in a greater digital world, or that world is just a broader program. Any legal world - one with physical laws following arithmetic logic - must be digital. Only an ultimately analog, ultimately infinite Reality - not bound by causality or even a probability distribution - can host it.

To simply suggest a philosophical or even theological approach to this will not be helpful - we can imagine or believe all we like, but what is beyond our perception will remain there. On the other hand, science as usual - as the late Stephen Hawking (and Kurt Gödel decades before him, and Bertrand Russell before him) rightfully pointed out - can only offer us the most reasonable theory that we can resolve within the limits of our perception, whether that is the Gray's Anatomy of a medical student's reality or the stick figure of a kindergartener.

Telescopes and microscopes must ultimately pass through the filter of our eyes, and even if we join ourselves to artificial intelligence with the greatest virtual-reality interface, this will not help with irreducibly complex patterns beyond our ability to perceive. This is not merely a stone wall to our curiosity about what actually exists.
Humanity is heading into a great deal of trouble due to these limitations - countless systems, from the climate to basic relationships between people, are approaching collapse. Approximately as Einstein is said to have said, it will take ten times the level of perception to solve the problems caused by what we've innovated at our present level.

What I would like to suggest is a very different sort of science - actually its most ancient root, which I think has found its most crucial era of application. It is the science of consciousness and perception of Kabbalah, literally "Receiving". Despite all the myths that have surrounded it, it is in essence a repeatable and correlative science - only the basic "instrumentation" is internal. Even quantum mechanics admits that any reality of measurement rests with the observer.

Just how far can this ladder take us? We can't really know. However, as a start, I highly recommend these:

  • Perceiving Reality
  • Kabbalah Revealed, Episode 1 - A Basic Overview
  • Kabbalah Revealed, Episode 2 - Perception of Reality

What Our Customers Say

I purchased CocoDoc PDFelements Pro and just a short while later the latest version came out. I wrote customer service and was told by "Helen" that I would get a free update. When I tried to update, it wouldn't take my registration code and said I needed to pay for the upgrade. I wrote "Helen" again and this is what she said: "Hello, I am sorry at that time the news was not released yet. So I am sorry that I think it is a free update, but it is a paid upgrade actually. Sorry for my misunderstanding at that time. If there is anything else we can do for you, please feel free to contact us. Sincerely, Helen" Seriously, CocoDoc???!!! You can and should do better than this! Shame on you for misleading customers!

Justin Miller