Investigating The Factors Affecting A Simple Pendulum: Fill & Download for Free


How to Edit and sign Investigating The Factors Affecting A Simple Pendulum Online

Read the following instructions to use CocoDoc to start editing and signing your Investigating The Factors Affecting A Simple Pendulum:

  • To get started, find the “Get Form” button and click it.
  • Wait until Investigating The Factors Affecting A Simple Pendulum is ready to use.
  • Customize your document using the toolbar at the top.
  • Download your customized form and share it as needed.

An Easy Editing Tool for Modifying Investigating The Factors Affecting A Simple Pendulum on Your Way

Open Your Investigating The Factors Affecting A Simple Pendulum Immediately


How to Edit Your PDF Investigating The Factors Affecting A Simple Pendulum Online

Editing your form online is quite effortless. There is no need to install any software on your computer or phone to use this feature. CocoDoc offers an easy tool that lets you edit your document directly in any web browser. The entire interface is well organized.

Follow the step-by-step guide below to edit your PDF files online:

  • Go to the official CocoDoc website in any web browser on the device where your file is stored.
  • Find the ‘Edit PDF Online’ button and click it.
  • You will be taken to the editor. Drag and drop the form, or upload the file through the ‘Choose File’ option.
  • Once the document is uploaded, you can edit it using the toolbar as needed.
  • When the modification is done, click the ‘Download’ button to save the file.

How to Edit Investigating The Factors Affecting A Simple Pendulum on Windows

Windows is the most widespread operating system. However, Windows does not include a default application that can directly edit PDFs. In this case, you can install CocoDoc's desktop software for Windows, which helps you work on documents quickly.

All you have to do is follow the guidelines below:

  • Get the CocoDoc software from the Windows Store.
  • Open the software and then upload your PDF document.
  • You can also select the PDF file from Google Drive.
  • After that, edit the document as needed by using the various tools at the top.
  • Once done, you can save the customized document to your computer. You can also check more details about how to edit a PDF.

How to Edit Investigating The Factors Affecting A Simple Pendulum on Mac

macOS comes with a built-in app, Preview, for opening PDF files. Although Mac users can view PDF files and even mark up text in them, Preview does not support full editing. With CocoDoc, you can edit your document on a Mac without hassle.

Follow the effortless steps below to start editing:

  • First of all, install the CocoDoc desktop app on your Mac computer.
  • Then, upload your PDF file through the app.
  • You can attach the PDF from any cloud storage, such as Dropbox, Google Drive, or OneDrive.
  • Edit, fill, and sign your document using the tool.
  • Lastly, download the PDF to save it on your device.

How to Edit PDF Investigating The Factors Affecting A Simple Pendulum with G Suite

G Suite is Google's widely used suite of productivity apps, designed to make your work easier and to improve collaboration. Integrating CocoDoc's PDF editor with G Suite can help you accomplish work more effectively.

Here are the guidelines to do it:

  • Open the Google Workspace Marketplace on your computer.
  • Search for CocoDoc PDF Editor and install the add-on.
  • Select the PDF that you want to edit in Drive and open it with CocoDoc PDF Editor by choosing "Open with".
  • Edit and sign your paper using the toolbar.
  • Save the customized PDF file on your cloud storage.

PDF Editor FAQ

What are some pros and cons of the communist state government in India's Kerala?

For many outsiders, this question is based on two false notions which I need to rectify firstKerala never had long term communist party rule unlike Bengal or Tripura. They could rule maximum 5 years and power alternates at end of every term of the government. So out of 61 years of modern Kerala history, Left front has ruled only 22 years (not more than 5 years in a stretch)Communist party or Congress party never able to rule alone in Kerala, barring 2 years for Undivided Communist Party when they formed the first government in 1957 which collapsed in its 2nd year. Ever since that, Kerala was always ruled by alliances, notably two alliances- United Democratic Front led by Congress and Left Democratic front led by CPM. In both fronts, there are left parties as well as non leftist parties. So all alliance rules Kerala based on a consensual agenda within the Alliance, not purely individual party decisions.These two points have a key role, as the question itself assume, there exist a Communist Government. Sorry, there is no Communist government in Kerala, rather Left Government, be it in past or today. And again, Communism in Kerala is not pure Communism. They are more like Social Democrats.Don't Call The CPM In Kerala Communists, Call Them Social DemocratsNow pros and consBasically there is little difference between UDF or LDF when comes to governance. As I said, since its coalition politics, no single party can implement its ideological agenda completely into governance, rather has to compromise a lot for coalition partners. UDF has lot of Leftist parties, hence Congress couldn’t implement absolute Neo-Capitalist model (Manmohan Singh concept of Liberalization/Privatization/Globalization policy) as well as LDF do have several centrist parties, thereby can’t implement pure Communist agendas.And both UDF and LDF are socialists to the core. Infact Congress party in Kerala (KPCC) has almost all its leaders who are hardcore socialists that they used fight against many policies which Congress unveiled since 1990s. So as Communists are much in sync with Chinese model, thereby have huge relationships and partnerships with several rich businessmen, thereby can’t oppose many capitalist policies. In short, all political parties in both alliances almost have similar ideologies and governance concept barring few minute differences.I am focusing only Pros and Cons of various Left Governments in Kerala from 1957 to present.ProsUndivided Communist Party unveiled Land Reforms concept in 1957, which was an absolute dynamic shift from traditional land holdings pattern. Overnight Kerala changed its image from feudalistic society to much more inclusive society with every landless citizen now got right to own certain minimum amount of land. Peasants who toiled for generations like a bonded labour in the land, suddenly became owners of few plots of the same land. The Landlord concept (Jenmi System or Zamindari system as called in North) disappeared. Though Congress party was opposed to it which actually brought communists down in 1959, they too were forced to accept this concept at later stage. Land reforms was a mixed bag. 
On one hand it enabled every landless citizen to feel ownership and thus threw themselves out of traditional feudal yokes, but caused heavy land fragmentation that killed agriculture in a big waySocial Development interventions: I won’t say Left Government as such, rather it was a policy of Communist party to expand the concept of literacy and rational movement, which got support of both LDF and UDF governments. This led to patronizing of various progressive societies, notably Kerala Sastra Sahitya Parishad (People’s Science Movement) to push for a scientific/rational revolution in Kerala.Total Literacy Campaign : In its first year, Communist Government embarked a massive literacy campaign. It focused heavily on taking Travancore/Cochin Model of education system to Malabar (which was part of British India and heavily lagged) and aggressively expanding public schools in Travancore-Cochin side aimed for mass enrollments. On other side KSSP and similar Left organization under patronage of communist parties and Left governments (read CPI which was part of UDF in 1970s) pushed for scientific movement. This heavily pushed for social literacy, massive public sanitation program, healthcare awareness programs, scientific rational building programs etc.Typical Village library in a remote village of Malappattam in Kannur. A good number of such libraries are developed by Communists or Leftist groups, due to which one can see Leftist iconography in such libraries.Granthashala Movement: There was a Library movement during Monarchy days in Travancore, which got a Pan Kerala face in 1954. The first Communist Government aggressively supported Kerala Granthashala Sanghom (Public Library Society) which pushed for an aggressive expansion of libraries across the state under state patronage (without state funding). During Monarchy days, libraries were mostly in cities and college areas, meant for elites. KGS under Left patronage (don’t read as government) changed the narratives for a concept of Pothu Granthashala concept (People’s Library) by which number of libraries rapidly expanded from 1780 to 4280 libraries across the state. In 1975, UNESCO’s prestigious Krupskaya Prize made KGS as a special mentioning. In 1987, LDF Government enacted Library Act, by which all libraries now became under Government patronage and funding mechanism. Under the Left govt Patronage from 1987 to 1991, number of Public libraries grew more than 6000, ie almost 6 libraries per village, nearly 25,000 reading rooms across Kerala, more than 20,000 literary clubs, making Kerala having highest number of libraries in the country. This played a key role in achieving literacy as well as enlarging political awareness. Even today, one hall mark of Kerala is seeing a Public library or reading room, even in most remotest village. And being a pro-Left legacy, its natural to see photos of Che Guevara and Gandhiji side to side decorating the walls. http://shodhganga.inflibnet.ac.in/bitstream/10603/40083/10/10_chapter3.pdfPublic Sanitation Campaign : While Travancore/Cochin Kingdoms historically formulated policies to push sanitation as one of its key agendas for development, Left Govt took from there in much more aggressive manner in a time, when it wasn’t much prioritized nationwide. The First Government in 1957 entered into a partnership with WHO for a sponsored scheme as part of a pilot project for rural sanitation that promulgated a single leach pit-type latrine with a squatting slab and the water seal bowl placed directly over the pit. 
The success of this scheme, resulted in subsequent governments to push for it. In 1988 under EK Nayanar government (LDF), Kerala government partnered with Dutch Government for a state wide latrine redevelopment scheme. By the time of exit of Nayanar govt in 1991, Kerala already reached 75% of its target plans of massive modern sanitation methods. This explains one major reason why Kerala was ahead of modern day Swach Bharat Abhayan programs. https://www.ircwash.org/sites/default/files/Kurup-1996-Community.pdfQuality Healthcare: Since the times of Monarchy, the governments focused in creating a community healthcare concept. And both LDF and UDF govts aggressively focused on developing Government Healthcare system. So definitely a good number of 3 tier hospital system (PHC, Taluk and General Hospital) got massively expanded since the times of first government. Communist Chief Minister- Achuta Menon (He was from CPI and that time part of UDF, not LDF) had a vision of quality tertiary hospitals in Kerala and focused in establishing Free India’s first Government owned Tertiary Cancer Center in Trivandrum. So as Kerala’s second Medical college in Kozhikode in 1957. Apart from Government level, Communist party focused on concept of Cooperative Medical facilities and established a series of healthcare institutions across Kerala (some coop hospitals are established by Congress) and these coop institutions got government support in a big way.The irony when Communists across India was opposing Computerization in late 1980s while in Kerala, they laid foundation to India’s first Tech ParkEstablishment of India’s first Technology Park (Technopark) in Trivandrum which was a vision of E.K Nayanar in 1991Community Policing- Janamaithri Concept: LDF can be best credited in launching India’s first Community Policing concept in 2009 during last LDF government. It was part of a major structural change to make Police to shed its anti-people people and make them friendly. This initiative got a national wide attention when featured in popular National show- Satyameva Jayathe. The scheme also brought several laurels to Kerala Police.Pushing PSUs to better profits margins: Left Always believe sustainability of PSUs which normally goes in loss during UDF term. Whenever Left govt comes to power, we could see most of loss making PSUs suddenly turning themself into profitable ventures.From Rs 131.6 crore net loss, Kerala's PSUs post Rs 106.91 crore net profit in 2 yearsA video presentation of Smart /digital classrooms of KeralaNadakkavu Govt Girls High School- Kozhikode is regarded as the best Model school in Kerala for its upgradation to International school levels which is now planned to extend to 100 major urban schools across KeralaUpgradation of Public Education sector : Left government always pushes for Public education, particular upgradation of Government schools, which normally doesn’t take priority for UDF governments. In every term when Left comes, they upgrades public education in a high priority. This time, Left Govt called the exercise as Puthuvidyabhasa Yajnam (The Holy Sacrifices of Public Education) by which the state set an ambitious target of converting 45,000 classrooms into Smart classrooms with all modern digital technologies like Digital boards, LED Screens, audio visual rooms, digital learning with higher school students using Notepads instead of notebooks etc. Already 20,000 clasrooms are in process of converting into Digital facilities and some are airconditioned with wifi facility. 
The focus is to create more schools at par with international schools that can attract middle class and upper middle class into Government schools. The result was seen when today (01/06/2018) Kerala recorded more than 3 Lakh new admissions to Government Schools for this academic year.20,000 classrooms in Kerala schools set to go hi-tech by Januaryhttp://www.newindianexpress.com/cities/thiruvananthapuram/2017/nov/28/sarkara-government-upper-primary-school-at-chirayinkeezh-opens-fully-air-conditioned-smart-classroom-1713217.htmlStrives in Electricity sector: Left government since EK Nayanar times, focused on building up installed capacity in the state and extending electricity connections to every home. This policy has got a continuation during UDF times since 1990s and by 2017, Kerala became fully electrified state with 99.2% of total homes having access to electricity.Kerala becomes first state to provide electricity to every home: Pinarayi VijayanSocial Justice: Left Government aggressively focused on social justice by accepting the concept rights of Transgenders and LBQT community. Though its a policy of UDF Govt initiated in 2014, Left Govt adopted it as part of their political manifesto and continued the UDF policy during their tenure. As part of this policy, Kerala opened India’s first Free Sex Change Surgery clinic in 2016, first Transgender school in India, Special skills training institute to upgrade the employable skills of TGs as well as employment of TGs in Kochi Metro. A Similar program is underway for prisoners in Kerala Jails who turned Jails into a money spinning industry. Again, this was a policy concept made by UDF govt, limited to ready to eat Food products like packed Chapattis etc. LDF Govt since 2016 took to much unbelievable concepts including starting of Beauty palours, Casual dining restaurants, Fashion Boutiques etc and planning to expand to new concepts like running Petrol pumps, developing furniture industry etc.9 Times Kerala Pioneered With Its Transgender Policies & Showed The World How It’s DoneState prisons reap profits as inmates script a success storyCurrent government has fasten the completion works of much delayed GAIL Pipeline, re-initiated 4 Lane works of National Highways and put works of decentralized waste management plans on stronger levels.Please don’t assume me as a Promoter of LDF. Just put the key points as above. Now lets come to the biggest disadvantages of LDF governance.ConsLeft Governments are never free from large amount of criticisms. Infact, its a fact, Left Govt in Kerala normally attracts large amount of Anti Incumbency factors from start of its governance itself due to attitude of many leaders. Due to this reason, rarely Left Government used to get Bye-elections within the state favorable for them (except in very few occasions) and the huge anti-incumbency factors always ensure, they are turned down during election season.Some of the major cons areAttitude problem: Most of the left leaders, particularly since 70s, maintained a very harsh outlook. While they get along with public well, they were never seen as pleasing personality or someone humble to the core. Barring Achutha Menon, EK Nayanar, no Communist Chief Minister ever bore a pleasing outlook to the public during their tenure. When I say pleasing, what I meant is humility and grounded level. For example, the most celebrated Communist CM EMS always known for his harsh speeches against his rivals and writes against them in most harsh way. 
VS Achuthandan who ruled Kerala from 2006–2011 always had a Stalinist image, be his body language or communication style. However he was extremely popular among people due to his aggressive stand on moral values and rights even against Party leadership. Currently Pinarayi Vijayan is also too much harsh and rarely exhibits humility or even a simple smile. On contrary much of UDF leaders are extremely grounded and public friendly. Its harder to find a single image of Karunakaran without smiles, AK Antony without folded hands and Oommen Chandy without crowds. These are one reason that make communist leaders looks too much harsh attitude unlike Congress leaders. Even though EK Nayanar was much better than normal CPM leaders, still he was known for his humorous satires and mocking rivals openly.Party Cell rule: The biggest complaint against Left leadership is that, whenever they come to power, almost all sectors of Government slowly becomes Party cells. Bureaucrats showing loyalty to the party gets preferential treatment and individual Party committees and its leaders gets an upper hand in various government offices. Whenever Left comes to power, its associated / affilated government unions takes over the key positions in government offices and thus several reports of less support those who are opposed to Party or non partisans. In many cases, government services do get delayed or even refused to those whom local party leaders considers as enemies (not those essential services by the way)Politicisation of Kerala police shows CPM doesn’t learn from its mistakesPolicing Issues: The most heard criticism against Left government is that whenever they comes to power, Police forces becomes totally partisan to Left leadership. Police often becomes an agent of Communist party rather public good. In last two years, current Left govt under Pinarayi Vijayan has faced extreme brickbats due to inefficiency of Policing. In majority cases, police forces either became partisian to ruling Left or have seriously lost morale, thus involved in various wrongdoings that has seriously affected the image of Kerala Police. Though Left governments has introduced several innovative concepts like Janamaithri schemes etc, its often reported lower end police officials who are affiliated to party gets more high handiness and they get involved with various wrong doings that damages the overall image. Equally party do openly get involved into various police actions, investigations which definitely harms their image in a big way. And Police often plays a role of spectators whenever CPM is involved in a crime, especially in Kannur Killings where they come only to arrest the criminals after they committed the crime, rather preventing the crime. Moreover often its reported, Police aids CPM by arresting dummy criminals as named by the party who are ready to sacrifice for them, instead of real criminals when comes to political murders of Kannur.Grave lapses, apathy by Kerala police but why has there been zero accountability?Kerala now has a new dread on the streets, an abusive police force that hapless citizens always fear - FirstpostPoor Social Protests: In Kerala, the key reason why Communists are respected whenever they are in opposition, is only because of their active or probably extraordinary level of intervention into various public issues and protests against government. CPM has numerous affilates and they all get involved actively against wrongdoings. 
For example, SFI actively protests, if fees of government institutions are hiked, if any private management reprimands students for wrong reasons or Government machinery lapses in education. Same way DYFI would be protesting against any social wrongs or corruption issues of the government. But when Left government comes to power, all these organizations go into deep hibernation, even in cases not directly against government. For example in much recent Trivandrum Law Academy protests, SFI almost stood as a spectators when their own fellow coalition partner’s student wing- AISF (CPI’s unit) completely took over the protests in a big way. Same way, DYFI will rarely open its mouth, even against the most worst form of nepotism or public wrongs in society when Left rules. This sort of double standards of these affiliates organizations, is one key issue, as CPM as a ruler absolutely ask all its affiliates to be silent, even if they don’t support the government on it.A report that came in a leading newspaper how children of CPM Leaders got privileged to be associated with various corporate companies.Support to crony capitalists : CPM often face criticisms for supporting and seeking patronage of few businessmen. Infact, its much similar to how BJP gets associated with some like Adani at national stage. CPM’s official ideology is always pro-poor, yet its too opposite to see them getting associated intensively with large Malayalee business magnates. Some of the their leader’s children with average educational qualifications and little professional experience ended up as Vice presidents and senior officials of companies owned by these business groups. This double standards always gets a serious attack over CPM Leadership as it works heavily to create nepotism. Left also got seriously attacked in recent years for supporting many Neo rich businessmen and their violation of environmental laws. Infact many do criticize heavily, CPM is actually taking all laundry bags of such neo rich for no reason, which indirectly highlights on nepotism issues.Nepotism Charges: CPM and most of left parties do keep a very high standards on record and in public domain. Naturally public do expect a lot from them. So when they behave like Congress politicians, it gets much deeper brickbats. Its normal anywhere in India, ruling party do give chances to their favourite men as well as within family members, a chance of being part of provisional government services or temporary employment in Government owned corporations or bodies. However this matter when CPM does always gets huge brickbats as they often talk and present reports on purity of political services. Naturally such behaviour doesn’t get accepted. Nepotism is one key charge often levied against many CPM ministers and did happen this time, when some of them appointed their newphews or son in laws in temporary Govt jobs. A minister who regarded as Number 2 in current government lost his ministerial berth only for appointing his son-in-law as manager of a government company. Media often takes extra interest in exposing the double standards of CPM as they often talk about purity and corruption free politics. Rarely CPM/Left gets involved in any serious scandals that involve money. But often get involved in nepotism charges, which is indeed too negative for them.The ongoing strike by former CPM workers against Left Government who decided to take their land for National Highway development. 
It is a typical example of how the CPM deals with its own workers with an iron fist instead of seeking consensus.

Iron-fist attitude: While in opposition, the Left often talks about consensual politics, whereas in power it tries to implement changes with an iron fist. Some of the worst police atrocities and forcible actions against protestors were recorded during Left rule. The recent forcible land acquisitions for the National Highway are one such example of how the Left, while in power, rarely respects the ideals it champions in opposition. This is one reason why the Left gets voted out once every 5 years.

There could be many more pros and cons; these are the key ones in my view.

It is true that both the Left and the Right (in the Kerala context, the Right means the UDF or Congress) are neither angels nor devils. Both have their pros and cons. While the UDF is more corruption-prone (with a focus on economic development), the LDF is more arrogance-prone (with a focus on social development).

The people of Kerala use elections as a pendulum, swinging from corruption to arrogance and back again.

What causes gravity? Do matter's properties have anything to do with gravity?

Interaction between celestial bodiesWhen two celestial bodies of comparable mass interact gravitationally, both orbit about a fixed point (the centre of mass of the two bodies). This point lies between the bodies on the line joining them at a position such that the products of the distance to each body with the mass of each body are equal. Thus, Earth and the Moon move in complementary orbits about their common centre of mass. The motion of Earth has two observable consequences. First, the direction of the Sun as seen from Earth relative to the very distant stars varies each month by about 12 arc seconds in addition to the Sun’s annual motion. Second, the line-of-sight velocity from Earth to a freely moving spacecraft varies each month by 2.04 metres per second, according to very accurate data obtained from radio tracking. From these results the Moon is found to have a mass 1/81 times that of Earth. With slight modifications Kepler’s laws remain valid for systems of two comparable masses; the foci of the elliptical orbits are the two-body centre-of-mass positions, and, putting M1+ M2n instead of MS in the expression of Kepler’s third law, equation (6), the third law reads:That agrees with equation (6) when one body is so small that its mass can be neglected. The rescaled formula can be used to determine the separate masses of binary stars (pairs of stars orbiting around each other) that are a known distance from the solar system. Equation (9) determines the sum of the masses; and, if R1 and R2 are the distances of the individual stars from the centre of mass, the ratio of the distances must balance the inverse ratio of the masses, and the sum of the distances is the total distance R. In symbolsThose relations are sufficient to determine the individual masses. Observations of the orbital motions of double stars, of the dynamic motions of stars collectively moving within their galaxies, and of the motions of the galaxies themselves verify that Newton’s law of gravity is valid to a high degree of accuracy throughout the visible universe.Ocean tides, phenomena that mystified thinkers for centuries, were also shown by Newton to be a consequence of the universal law of gravitation, although the details of the complicated phenomena were not understood until comparatively recently. They are caused specifically by the gravitational pull of the Moon and, to a lesser extent, of the Sun.Newton showed that the equatorial bulge of Earth was a consequence of the balance between the centrifugal forces of the rotation of Earth and the attractions of each particle of Earth on all others. The value of gravity at the surface of Earth increases in a corresponding way from the Equator to the poles. Among the data that Newton used to estimate the size of the equatorial bulge were the adjustments to his pendulum clock that the English astronomer Edmond Halley had to make in the course of his astronomical observations on the southern island of Saint Helena. Jupiter, which rotates faster than Earth, has a proportionally larger equatorial bulge, the difference between its polar and equatorial radii being about 10 percent. Another success of Newton’s theory was his demonstration that comets move in parabolic orbits under the gravitational attraction of the Sun. In a thorough analysis in the Principia, he showed that the great comet of 1680–81 did indeed follow a parabolic path.It was already known in Newton’s day that the Moon does not move in a simple Keplerian orbit. 
Later, more-accurate observations of the planets also showed discrepancies from Kepler’s laws. The motion of the Moon is particularly complex; however, apart from a long-term acceleration due to tides on Earth, the complexities can be accounted for by the gravitational attraction of the Sun and the planets. The gravitational attractions of the planets for each other explain almost all the features of their motions. The exceptions are nonetheless important. Uranus, the seventh planet from the Sun, was observed to undergo variations in its motion that could not be explained by perturbations from Saturn, Jupiter, and the other planets. Two 19th-century astronomers, John Couch Adams of Britain and Urbain-Jean-Joseph Le Verrier of France, independently assumed the presence of an unseen eighth planet that could produce the observed discrepancies. They calculated its position within a degree of where the planet Neptune was discovered in 1846. Measurements of the motion of the innermost planet, Mercury, over an extended period led astronomers to conclude that the major axis of this planet’s elliptical orbit precesses in space at a rate 43 arc seconds per century faster than could be accounted for from perturbations of the other planets. In this case, however, no other bodies could be found that could produce this discrepancy, and very slight modification of Newton’s law of gravitation seemed to be needed. Einstein’s theory of relativity precisely predicts this observed behaviour of Mercury’s orbit.Potential theoryFor irregular, nonspherical mass distributions in three dimensions, Newton’s original vector equation (4) is inefficient, though theoretically it could be used for finding the resulting gravitational field. The main progress in classical gravitational theory after Newton was the development of potential theory, which provides the mathematical representation of gravitational fields. It allows practical as well as theoretical investigation of the gravitational variations in space and of the anomalies due to the irregularities and shape deformations of Earth.Potential theory led to the following elegant formulation: the gravitational acceleration g is a function of position R, g(R), which at any point in space is given from a function Φ called the gravitational potential, by means of a generalization of the operation of differentiation:in which i, j, and k stand for unit basis vectors in a three-dimensional Cartesian coordinate system. The potential and therefore g are determined by an equation discovered by the French mathematician Siméon-Denis Poisson:where ρ(R) is the density at the vector position R.The significance of this approach is that Poisson’s equation can be solved under rather general conditions, which is not the case with Newton’s equation. When the mass density ρ is nonzero, the solution is expressed as the definite integral:where the integral is a three-dimensional integral over the volume of all space. When ρ = 0 (in particular, outside Earth), Poisson’s equation reduces to the simpler equation of Laplace.The appropriate coordinates for the region outside the nearly spherical Earth are spherical polar coordinates: R, the distance from the centre of Earth; θ, the colatitude measured from the North Pole; and the longitude measured from Greenwich. 
The solutions are series of powers of R multiplied by trigonometric functions of colatitude and longitude, known as spherical harmonics; the first terms are:The constants J2, J3, and so forth are determined by the detailed mass distribution of Earth; and, since Newton showed that for a spherical body all the Jn are zero, they must measure the deformation of Earth from a spherical shape. J2 measures the magnitude of Earth’s rotational equatorial bulge, J3 measures a slight pear-shaped deformation of Earth, and so on. The orbits of spacecraft around Earth, other planets, and the Moon deviate from simple Keplerian ellipses in consequence of the various spherical harmonic terms in the potential. Observations of such deviations were made for the very first artificial spacecraft. The parameters J2 and J3 for Earth have been found to be 1,082.7 × 10^−6 and −2.4 × 10^−6 , respectively. Very many other harmonic terms have been found in that way for Earth and also for the Moon and for other planets. Halley had already pointed out in the 18th century that the motions of the moons of Jupiter are perturbed from simple ellipses by the variation of gravity around Jupiter.The surface of the oceans, if tides and waves are ignored, is a surface of constant potential of gravity and rotation. If the only spherical harmonic term in gravity were that corresponding to the equatorial bulge, the sea surface would be just a spheroid of revolution (a surface formed by rotating a two-dimensional curve about some axis; for example, rotating an ellipse about its major axis produces an ellipsoid). Additional terms in the potential give rise to departures of the sea surface from that simple form. The actual form may be calculated from the sum of the known harmonic terms, but it is now possible to measure the form of the sea surface itself directly by laser ranging from spacecraft. Whether found indirectly by calculation or directly by measurement, the form of the sea surface may be shown as contours of its deviation from the simple spheroid of revolution.The variation in the gravitational field, given in milliGals (mGal), over the Earth’s surface gives …Encyclopædia Britannica, Inc.Effects of local mass differencesSpherical harmonics are the natural way of expressing the large-scale variations of potential that arise from the deep structure of Earth. However, spherical harmonics are not suitable for local variations due to more-superficial structures. Not long after Newton’s time, it was found that the gravity on top of large mountains is less than expected on the basis of their visible mass. The idea of isostasy was developed, according to which the unexpectedly low acceleration of gravity on a mountain is caused by low-density rock 30 to 100 km underground, which buoys up the mountain. Correspondingly, the unexpectedly high force of gravity on ocean surfaces is explained by dense rock 10 to 30 km beneath the ocean bottom.Portable gravimeters, which can detect variations of one part in 10^9in the gravitational force, are in wide use today for mineral and oil prospecting. Unusual underground deposits reveal their presence by producing local gravitational variations.Weighing the EarthThe mass of Earth can be calculated from its radius and g if G is known. 
G was measured by the English physicist-chemist Henry Cavendish and other early experimenters, who spoke of their work as “weighing the Earth.” The mass of Earth is about 5.98 × 10^24kg, while the mean densities of Earth, the Sun, and the Moon are, respectively, 5.52, 1.43, and 3.3 times that of water.What causes Gravity?One of Einstein's old tutors, a man by the name of Minkowski showed that the special theory of relativity could be expressed in an interesting way.The world we live in consists of four dimensions, the three space dimensions and one that is not exactly time but is related to time (it is in fact time multiplied by the square root of -1). This is not at all easy to understand but it means that space-time as we call it has some rather weird properties. In particular, when you move through one of the space dimensions you also travel, unwittingly, through time. You do not notice this, indeed as far as you are concerned nothing happens to you at all, but someone observing you would say that you have travelled through time. Of course, we are always travelling through time, but when you travel through space you travel through time by less that you expect. The most famous example of this effect is the "Twins Paradox".All the effects of special relativity, such as the slowing down of clocks and the shrinking of rods follow from the above. In fact, it is often better to think of some things, such as electromagnetic fields as being four-dimensional objects. However, the important thing to remember for the moment, is: when you move through space you are compelled to move through time but, when you move through time (which of course you are always doing) you do not have to move through space.So, what does this have to do with gravity? It is quite simple! When a mass is present in the above space-time it distorts it so that whilst it remains true that travelling through space causes you to travel through time, travelling through time now causes you to move (accelerate) through space but as stated above, when you move through time (which of course you are always doing) you do not have to move through space. In other words, when you are compelled to move through space - this is gravity.The particular advantage of this theory of gravity (General Relativity) is that it explains, at a stroke, all the observed properties of gravity. For example the fact that it acts equally on all objects and substances becomes obvious when you thing of gravity as a distortion of space-time rather than a force.Imagine that you are in free space, away from any planets or stars, when suddenly a planet is created quite close to you. You would not be aware that anything is happening to you, you would feel no force, but you would find that you started to accelerate towards the planet. This is just like the case where you travel through space, you are not aware that you have also travelled through time but people observing you are.Can you feel gravity?You can argue that we do not. What happens is that, as you pass through time, the distortion of space-time caused by the presence of the earth accelerates your body towards the centre of the earth. However, when your feet are touching the ground, the ground exerts a force on your feet in an upwards direction which pushes you in the opposite direction. In other words you are being accelerated upwards with respect to space-time by the force of the ground acting on your feet. 
It is exactly the same as the force that seems to push you back into your car seat when you accelerate: what is really happening is that the seat is pushing you forwards.
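Returning to the two-body procedure described earlier (the sum of the masses follows from the rescaled third law G(M1 + M2)T² = 4π²R³, and the ratio of the masses from the centre-of-mass relations M1R1 = M2R2 with R1 + R2 = R), here is a minimal Python sketch of that calculation. The Earth–Moon numbers are approximate values supplied for illustration, not taken from the passage.

```python
import math

G = 6.674e-11  # gravitational constant in m^3 kg^-1 s^-2

def two_body_masses(R, T, R1):
    """Masses of two mutually orbiting bodies from their separation R (m),
    orbital period T (s), and the distance R1 (m) of body 1 from the
    common centre of mass (so R2 = R - R1)."""
    # Rescaled Kepler third law: G*(M1 + M2)*T^2 = 4*pi^2*R^3 gives the sum.
    m_sum = 4 * math.pi**2 * R**3 / (G * T**2)
    # Centre-of-mass condition M1*R1 = M2*R2 fixes the ratio, so
    # M1 = m_sum * R2 / R and M2 = m_sum * R1 / R.
    R2 = R - R1
    return m_sum * R2 / R, m_sum * R1 / R

# Approximate Earth-Moon values (illustrative): separation ~3.844e8 m,
# sidereal period ~27.32 days, Earth ~4.67e6 m from the barycentre.
m_earth, m_moon = two_body_masses(3.844e8, 27.32 * 86400, 4.67e6)
print(f"M1 = {m_earth:.2e} kg, M2 = {m_moon:.2e} kg, M1/M2 = {m_earth/m_moon:.0f}")
```

With these inputs the mass ratio comes out near 81, consistent with the statement above that the Moon has about 1/81 of Earth's mass.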

Is it possible to ever get a 100% accurate result in science when applied in real life? Can we calculate the real life exact speed of a ball falling down?

THE UNCERTAINTY OF MEASUREMENTSSome numerical statements are exact: Mary has 3 brothers, and 2 + 2 = 4. However, all measurements have some degree of uncertainty that may come from a variety of sources. The process of evaluating the uncertainty associated with a measurement result is often called uncertainty analysis or error analysis.The complete statement of a measured value should include an estimate of the level of confidence associated with the value. Properly reporting an experimental result along with its uncertainty allows other people to make judgments about the quality of the experiment, and it facilitates meaningful comparisons with other similar values or a theoretical prediction. Without an uncertainty estimate, it is impossible to answer the basic scientific question: "Does my result agree with a theoretical prediction or results from other experiments?" This question is fundamental for deciding if a scientific hypothesis is confirmed or refuted.When we make a measurement, we generally assume that some exact or true value exists based on how we define what is being measured. While we may never know this true value exactly, we attempt to find this ideal quantity to the best of our ability with the time and resources available. As we make measurements by different methods, or even when making multiple measurements using the same method, we may obtain slightly different results. So how do we report our findings for our best estimate of this elusive true value? The most common way to show the range of values that we believe includes the true value is:( 1 )measurement = (best estimate ± uncertainty) unitsLet's take an example. Suppose you want to find the mass of a gold ring that you would like to sell to a friend. You do not want to jeopardize your friendship, so you want to get an accurate mass of the ring in order to charge a fair market price. You estimate the mass to be between 10 and 20 grams from how heavy it feels in your hand, but this is not a very precise estimate. After some searching, you find an electronic balance that gives a mass reading of 17.43 grams. While this measurement is much more precise than the original estimate, how do you know that it is accurate, and how confident are you that this measurement represents the true value of the ring's mass? Since the digital display of the balance is limited to 2 decimal places, you could report the mass as m = 17.43 ± 0.01 g. Suppose you use the same electronic balance and obtain several more readings: 17.46 g, 17.42 g, 17.44 g, so that the average mass appears to be in the range of 17.44 ± 0.02 g. By now you may feel confident that you know the mass of this ring to the nearest hundredth of a gram, but how do you know that the true value definitely lies between 17.43 g and 17.45 g? Since you want to be honest, you decide to use another balance that gives a reading of 17.22 g. This value is clearly below the range of values found on the first balance, and under normal circumstances, you might not care, but you want to be fair to your friend. So what do you do now? The answer lies in knowing something about the accuracy of each http://instrument.To help answer these questions, we should first define the terms accuracy and precision:Accuracy is the closeness of agreement between a measured value and a true or accepted value. Measurement error is the amount of inaccuracy.Precision is a measure of how well a result can be determined (without reference to a theoretical or true value). 
It is the degree of consistency and agreement among independent measurements of the same quantity; also the reliability or reproducibility of the result.The uncertainty estimate associated with a measurement should account for both the accuracy and precision of the measurement.Note: Unfortunately the terms error and uncertainty are often used interchangeably to describe both imprecision and inaccuracy. This usage is so common that it is impossible to avoid entirely. Whenever you encounter these terms, make sure you understand whether they refer to accuracy or precision, or both.Notice that in order to determine the accuracy of a particular measurement, we have to know the ideal, true value. Sometimes we have a "textbook" measured value, which is well known, and we assume that this is our "ideal" value, and use it to estimate the accuracy of our result. Other times we know a theoretical value, which is calculated from basic principles, and this also may be taken as an "ideal" value. But physics is an empirical science, which means that the theory must be validated by experiment, and not the other way around. We can escape these difficulties and retain a useful definition of accuracy by assuming that, even when we do not know the true value, we can rely on the best available accepted value with which to compare our experimental value.For our example with the gold ring, there is no accepted value with which to compare, and both measured values have the same precision, so we have no reason to believe one more than the other. We could look up the accuracy specifications for each balance as provided by the manufacturer (the Appendix at the end of this lab manual contains accuracy data for most instruments you will use), but the best way to assess the accuracy of a measurement is to compare with a known standard. For this situation, it may be possible to calibrate the balances with a standard mass that is accurate within a narrow tolerance and is traceable to a primary mass standard at the National Institute of Standards and Technology (NIST). Calibrating the balances should eliminate the discrepancy between the readings and provide a more accurate mass measurement.Precision is often reported quantitatively by using relative or fractional uncertainty:( 2 )Relative Uncertainty = uncertaintymeasured quantityExample: m = 75.5 ± 0.5 g has a fractional uncertainty of: 0.5 g75.5 g = 0.006 = 0.7%. Accuracy is often reported quantitatively by using relative error:( 3 )Relative Error = measured value − expected valueexpected valueIf the expected value for m is 80.0 g, then the relative error is: 75.5 − 80.080.0 = −0.056 = −5.6%Note: The minus sign indicates that the measured value is less than the expected value.When analyzing experimental data, it is important that you understand the difference between precision and accuracy. Precision indicates the quality of the measurement, without any guarantee that the measurement is "correct." Accuracy, on the other hand, assumes that there is an ideal value, and tells how far your answer is from that ideal, "right" answer. These concepts are directly related to random and systematic measurement errors.TYPES OF ERRORSMeasurement errors may be classified as either random or systematic, depending on how the measurement was obtained (an instrument could cause a random error in one situation and a systematic error in another).Random errors are statistical fluctuations (in either direction) in the measured data due to the precision limitations of the measurement device. 
Random errors can be evaluated through statistical analysis and can be reduced by averaging over a large number of observations (see standard error).Systematic errors are reproducible inaccuracies that are consistently in the same direction. These errors are difficult to detect and cannot be analyzed statistically. If a systematic error is identified when calibrating against a standard, applying a correction or correction factor to compensate for the effect can reduce the bias. Unlike random errors, systematic errors cannot be detected or reduced by increasing the number of observations.When making careful measurements, our goal is to reduce as many sources of error as possible and to keep track of those errors that we can not eliminate. It is useful to know the types of errors that may occur, so that we may recognize them when they arise. Common sources of error in physics laboratory experiments:Incomplete definition (may be systematic or random) — One reason that it is impossible to make exact measurements is that the measurement is not always clearly defined. For example, if two different people measure the length of the same string, they would probably get different results because each person may stretch the string with a different tension. The best way to minimize definition errors is to carefully consider and specify the conditions that could affect the measurement.Failure to account for a factor (usually systematic) — The most challenging part of designing an experiment is trying to control or account for all possible factors except the one independent variable that is being analyzed. For instance, you may inadvertently ignore air resistance when measuring free-fall acceleration, or you may fail to account for the effect of the Earth's magnetic field when measuring the field near a small magnet. The best way to account for these sources of error is to brainstorm with your peers about all the factors that could possibly affect your result. This brainstorm should be done before beginning the experiment in order to plan and account for the confounding factors before taking data. Sometimes a correction can be applied to a result after taking data to account for an error that was not detected earlier.Environmental factors (systematic or random) — Be aware of errors introduced by your immediate working environment. You may need to take account for or protect your experiment from vibrations, drafts, changes in temperature, and electronic noise or other effects from nearby apparatus.Instrument resolution (random) — All instruments have finite precision that limits the ability to resolve small measurement differences. For instance, a meter stick cannot be used to distinguish distances to a precision much better than about half of its smallest scale division (0.5 mm in this case). One of the best ways to obtain more precise measurements is to use a null difference method instead of measuring a quantity directly. Null or balance methods involve using instrumentation to measure the difference between two similar quantities, one of which is known very accurately and is adjustable. The adjustable reference quantity is varied until the difference is reduced to zero. The two quantities are then balanced and the magnitude of the unknown quantity can be found by comparison with a measurement standard. 
With this method, problems of source instability are eliminated, and the measuring instrument can be very sensitive and does not even need a scale.Calibration (systematic) — Whenever possible, the calibration of an instrument should be checked before taking data. If a calibration standard is not available, the accuracy of the instrument should be checked by comparing with another instrument that is at least as precise, or by consulting the technical data provided by the manufacturer. Calibration errors are usually linear (measured as a fraction of the full scale reading), so that larger values result in greater absolute errors.Zero offset (systematic) — When making a measurement with a micrometer caliper, electronic balance, or electrical meter, always check the zero reading first. Re-zero the instrument if possible, or at least measure and record the zero offset so that readings can be corrected later. It is also a good idea to check the zero reading throughout the experiment. Failure to zero a device will result in a constant error that is more significant for smaller measured values than for larger ones.Physical variations (random) — It is always wise to obtain multiple measurements over the widest range possible. Doing so often reveals variations that might otherwise go undetected. These variations may call for closer examination, or they may be combined to find an average value.Parallax (systematic or random) — This error can occur whenever there is some distance between the measuring scale and the indicator used to obtain a measurement. If the observer's eye is not squarely aligned with the pointer and scale, the reading may be too high or low (some analog meters have mirrors to help with this alignment).Instrument drift (systematic) — Most electronic instruments have readings that drift over time. The amount of drift is generally not a concern, but occasionally this source of error can be significant.Lag time and hysteresis (systematic) — Some measuring devices require time to reach equilibrium, and taking a measurement before the instrument is stable will result in a measurement that is too high or low. A common example is taking temperature readings with a thermometer that has not reached thermal equilibrium with its environment. A similar effect is hysteresis where the instrument readings lag behind and appear to have a "memory" effect, as data are taken sequentially moving up or down through a range of values. Hysteresis is most commonly associated with materials that become magnetized when a changing magnetic field is applied.Personal errors come from carelessness, poor technique, or bias on the part of the experimenter. The experimenter may measure incorrectly, or may use poor technique in taking a measurement, or may introduce a bias into measurements by expecting (and inadvertently forcing) the results to agree with the expected outcome.Gross personal errors, sometimes called mistakes or blunders, should be avoided and corrected if discovered. As a rule, personal errors are excluded from the error analysis discussion because it is generally assumed that the experimental result was obtained by following correct procedures. The term human error should also be avoided in error analysis discussions because it is too general to be useful.ESTIMATING EXPERIMENTAL UNCERTAINTY FOR A SINGLE MEASUREMENTAny measurement you make will have some uncertainty associated with it, no matter the precision of your measuring tool. 
So how do you determine and report this uncertainty?The uncertainty of a single measurement is limited by the precision and accuracy of the measuring instrument, along with any other factors that might affect the ability of the experimenter to make the measurement.For example, if you are trying to use a meter stick to measure the diameter of a tennis ball, the uncertainty might be ± 5 mm, but if you used a Vernier caliper, the uncertainty could be reduced to maybe ± 2 mm. The limiting factor with the meter stick is parallax, while the second case is limited by ambiguity in the definition of the tennis ball's diameter (it's fuzzy!). In both of these cases, the uncertainty is greater than the smallest divisions marked on the measuring tool (likely 1 mm and 0.05 mm respectively). Unfortunately, there is no general rule for determining the uncertainty in all measurements. The experimenter is the one who can best evaluate and quantify the uncertainty of a measurement based on all the possible factors that affect the result. Therefore, the person making the measurement has the obligation to make the best judgment possible and report the uncertainty in a way that clearly explains what the uncertainty represents:( 4 )Measurement = (measured value ± standard uncertainty) unit of measurementwhere the ± standard uncertainty indicates approximately a 68% confidence interval (see sections on Standard Deviation and Reporting Uncertainties).Example: Diameter of tennis ball = 6.7 ± 0.2 cm.ESTIMATING UNCERTAINTY IN REPEATED MEASUREMENTSSuppose you time the period of oscillation of a pendulum using a digital instrument (that you assume is measuring accurately) and find: T = 0.44 seconds. This single measurement of the period suggests a precision of ±0.005 s, but this instrument precision may not give a complete sense of the uncertainty. If you repeat the measurement several times and examine the variation among the measured values, you can get a better idea of the uncertainty in the period. For example, here are the results of 5 measurements, in seconds: 0.46, 0.44, 0.45, 0.44, 0.41.( 5 )Average (mean) = x1 + x2 + + xNNFor this situation, the best estimate of the period is the average, or mean.Whenever possible, repeat a measurement several times and average the results. This average is generally the best estimate of the "true" value (unless the data set is skewed by one or more outliers which should be examined to determine if they are bad data points that should be omitted from the average or valid measurements that require further investigation). Generally, the more repetitions you make of a measurement, the better this estimate will be, but be careful to avoid wasting time taking more measurements than is necessary for the precision required.Consider, as another example, the measurement of the width of a piece of paper using a meter stick. Being careful to keep the meter stick parallel to the edge of the paper (to avoid a systematic error which would cause the measured value to be consistently higher than the correct value), the width of the paper is measured at a number of points on the sheet, and the values obtained are entered in a data table. Note that the last digit is only a rough estimate, since it is difficult to read a meter stick to the nearest tenth of a millimeter (0.01 cm).( 6 )Average = sum of observed widthsno. of observations = 155.96 cm5 = 31.19 cmThis average is the best available estimate of the width of the piece of paper, but it is certainly not exact. 
We would have to average an infinite number of measurements to approach the true mean value, and even then we are not guaranteed that the mean value is accurate, because there is still some systematic error from the measuring tool, which can never be calibrated perfectly. So how do we express the uncertainty in our average value?

One way to express the variation among the measurements is to use the average deviation. This statistic tells us on average (with 50% confidence) how much the individual measurements vary from the mean.

( 7 ) d = ( |x1 − x̄| + |x2 − x̄| + ... + |xN − x̄| ) / N

However, the standard deviation is the most common way to characterize the spread of a data set. The standard deviation is always slightly greater than the average deviation, and is used because of its association with the normal distribution that is frequently encountered in statistical analyses.

STANDARD DEVIATION

To calculate the standard deviation for a sample of N measurements:

1. Sum all the measurements and divide by N to get the average, or mean.
2. Now, subtract this average from each of the N measurements to obtain N "deviations".
3. Square each of these N deviations and add them all up.
4. Divide this result by (N − 1) and take the square root.

We can write out the formula for the standard deviation as follows. Let the N measurements be called x1, x2, ..., xN. Let the average of the N values be called x̄. Then each deviation is given by δxi = xi − x̄, for i = 1, 2, ..., N. The standard deviation is:

( 8 ) s = √[ (δx1² + δx2² + ... + δxN²) / (N − 1) ] = √[ Σ δxi² / (N − 1) ]

In our previous example, the average width x̄ is 31.19 cm. The magnitudes of the five deviations from this average are 0.14, 0.04, 0.07, 0.17, and 0.01 cm. The average deviation is d = 0.086 cm. The standard deviation is:

s = √[ ( (0.14)² + (0.04)² + (0.07)² + (0.17)² + (0.01)² ) / (5 − 1) ] = 0.12 cm

The significance of the standard deviation is this: if you now make one more measurement using the same meter stick, you can reasonably expect (with about 68% confidence) that the new measurement will be within 0.12 cm of the estimated average of 31.19 cm. In fact, it is reasonable to use the standard deviation as the uncertainty associated with this single new measurement. However, the uncertainty of the average value is the standard deviation of the mean, which is always less than the standard deviation (see the next section).
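The four-step recipe above translates directly into code. Here is a minimal Python sketch (illustrative only, not from the lab write-up); the pendulum period readings from earlier are used because the individual paper-width readings are not listed here, but the same functions apply to any data set.

```python
import math

def mean(values):
    return sum(values) / len(values)

def average_deviation(values):
    """Equation (7): mean of the absolute deviations from the average."""
    m = mean(values)
    return sum(abs(x - m) for x in values) / len(values)

def sample_std_dev(values):
    """Equation (8): square the deviations, divide by (N - 1), take the square root."""
    m = mean(values)
    return math.sqrt(sum((x - m) ** 2 for x in values) / (len(values) - 1))

periods = [0.46, 0.44, 0.45, 0.44, 0.41]  # seconds, from the pendulum example
print(f"mean = {mean(periods):.3f} s")                       # 0.440 s
print(f"average deviation = {average_deviation(periods):.3f} s")  # 0.012 s
print(f"standard deviation = {sample_std_dev(periods):.3f} s")    # 0.019 s
```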
Consider an example where 100 measurements of a quantity were made. The average or mean value was 10.5 and the standard deviation was s = 1.83. The figure below is a histogram of the 100 measurements, which shows how often a certain range of values was measured. For example, in 20 of the measurements the value was in the range 9.5 to 10.5, and most of the readings were close to the mean value of 10.5. The standard deviation s for this set of measurements is roughly how far from the average value most of the readings fell. For a large enough sample, approximately 68% of the readings will be within one standard deviation of the mean value, 95% of the readings will be in the interval x̄ ± 2s, and nearly all (99.7%) of the readings will lie within 3 standard deviations of the mean. The smooth curve superimposed on the histogram is the gaussian or normal distribution predicted by theory for measurements involving random errors. As more and more measurements are made, the histogram will more closely follow the bell-shaped gaussian curve, but the standard deviation of the distribution will remain approximately the same.

Figure 1

STANDARD DEVIATION OF THE MEAN (STANDARD ERROR)

When we report the average value of N measurements, the uncertainty we should associate with this average value is the standard deviation of the mean, often called the standard error (SE).

( 9 ) σx̄ = s / √N

The standard error is smaller than the standard deviation by a factor of 1/√N. This reflects the fact that we expect the uncertainty of the average value to get smaller when we use a larger number of measurements, N. In the previous example, the standard error is 0.05 cm, where we have divided the standard deviation of 0.12 cm by √5. The final result should then be reported as: Average paper width = 31.19 ± 0.05 cm.

ANOMALOUS DATA

The first step you should take in analyzing data (and even while taking data) is to examine the data set as a whole to look for patterns and outliers. Anomalous data points that lie outside the general trend of the data may suggest an interesting phenomenon that could lead to a new discovery, or they may simply be the result of a mistake or random fluctuations. In any case, an outlier requires closer examination to determine the cause of the unexpected result. Extreme data should never be "thrown out" without clear justification and explanation, because you may be discarding the most significant part of the investigation! However, if you can clearly justify omitting an inconsistent data point, then you should exclude the outlier from your analysis so that the average value is not skewed from the "true" mean.

FRACTIONAL UNCERTAINTY REVISITED

When a reported value is determined by taking the average of a set of independent readings, the fractional uncertainty is given by the ratio of the uncertainty divided by the average value. For this example,

( 10 ) Fractional uncertainty = uncertainty / average = 0.05 cm / 31.19 cm = 0.0016 ≈ 0.2%

Note that the fractional uncertainty is dimensionless but is often reported as a percentage or in parts per million (ppm) to emphasize the fractional nature of the value. A scientist might also make the statement that this measurement "is good to about 1 part in 500" or "precise to about 0.2%".

The fractional uncertainty is also important because it is used in propagating uncertainty in calculations using the result of a measurement, as discussed in the next section.

PROPAGATION OF UNCERTAINTY

Suppose we want to determine a quantity f, which depends on x and perhaps several other variables y, z, etc. We want to know the error in f if we measure x, y, ... with errors σx, σy, ...

Examples:

( 11 ) f = xy (area of a rectangle)
( 12 ) f = p cos θ (x-component of momentum)
( 13 ) f = x/t (velocity)

For a single-variable function f(x), the deviation in f can be related to the deviation in x using calculus:

( 14 ) δf = (df/dx) δx

Thus, taking the square and the average:

( 15 ) (δf)² = (df/dx)² (δx)²

and using the definition of σ, we get:

( 16 ) σf = |df/dx| σx

Examples:

(a) f = √x

( 17 ) df/dx = 1/(2√x)
( 18 ) σf = σx / (2√x), or σf/f = (1/2)(σx/x)

(b) f = x²

( 19 ) df/dx = 2x
( 20 ) σf/f = 2 σx/x

(c) f = cos θ

( 21 ) df/dθ = −sin θ
( 22 ) σf = |sin θ| σθ, or σf/f = |tan θ| σθ

Note: in this situation, σθ must be in radians.
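As a quick numerical check of equation ( 16 ), the sketch below (a minimal Python example; the measurement x = 4.0 ± 0.2 is invented purely for illustration) propagates an uncertainty through f = x² using both a simple finite-difference estimate of df/dx and the analytic derivative. The two results agree, and both reproduce the rule σf/f = 2 σx/x from example (b).

```python
import math

def propagate_1d(f, x, sigma_x, h=1e-6):
    """Equation (16): sigma_f = |df/dx| * sigma_x, with df/dx from a central finite difference."""
    dfdx = (f(x + h) - f(x - h)) / (2 * h)
    return abs(dfdx) * sigma_x

x, sigma_x = 4.0, 0.2            # hypothetical measurement, x = 4.0 +/- 0.2
f = lambda v: v ** 2             # the function from example (b)

sigma_f_numeric = propagate_1d(f, x, sigma_x)
sigma_f_analytic = abs(2 * x) * sigma_x        # df/dx = 2x for f = x^2
print(sigma_f_numeric, sigma_f_analytic)       # both ~1.6, i.e. sigma_f/f = 2 * sigma_x/x = 10%
```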
In the case where f depends on two or more variables, the derivation above can be repeated with minor modification. For two variables, f(x, y), we have:

( 23 ) δf = (∂f/∂x) δx + (∂f/∂y) δy

The partial derivative ∂f/∂x means differentiating f with respect to x while holding the other variables fixed. Taking the square and the average, we get the law of propagation of uncertainty:

( 24 ) (δf)² = (∂f/∂x)² (δx)² + (∂f/∂y)² (δy)² + 2 (∂f/∂x)(∂f/∂y) δx δy

If the measurements of x and y are uncorrelated, then δx δy = 0, and we get:

( 25 ) σf = √[ (∂f/∂x)² σx² + (∂f/∂y)² σy² ]

Examples:

(a) f = x + y

( 26 ) ∂f/∂x = 1, ∂f/∂y = 1
( 27 ) ∴ σf = √( σx² + σy² )

When adding (or subtracting) independent measurements, the absolute uncertainty of the sum (or difference) is the root sum of squares (RSS) of the individual absolute uncertainties. When adding correlated measurements, the uncertainty in the result is simply the sum of the absolute uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Adding or subtracting a constant does not change the absolute uncertainty of the calculated value as long as the constant is an exact value.

(b) f = xy

( 28 ) ∂f/∂x = y, ∂f/∂y = x
( 29 ) ∴ σf = √( y² σx² + x² σy² )

Dividing the previous equation by f = xy, we get:

( 30 ) σf/f = √[ (σx/x)² + (σy/y)² ]

(c) f = x/y

( 31 ) ∂f/∂x = 1/y, ∂f/∂y = −x/y²
( 32 ) ∴ σf = √[ (1/y)² σx² + (x/y²)² σy² ]

Dividing the previous equation by f = x/y, we get:

( 33 ) σf/f = √[ (σx/x)² + (σy/y)² ]

When multiplying (or dividing) independent measurements, the relative uncertainty of the product (quotient) is the RSS of the individual relative uncertainties. When multiplying correlated measurements, the uncertainty in the result is just the sum of the relative uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Multiplying or dividing by a constant does not change the relative uncertainty of the calculated value.

Note that the relative uncertainty in f, as shown in (b) and (c) above, has the same form for multiplication and division: the relative uncertainty in a product or quotient depends on the relative uncertainty of each individual term.

Example: Find the uncertainty in v, where v = at, with a = 9.8 ± 0.1 m/s² and t = 1.2 ± 0.1 s.

( 34 ) σv/v = √[ (σa/a)² + (σt/t)² ] = √[ (0.1/9.8)² + (0.1/1.2)² ] = √[ (0.010)² + (0.083)² ] = 0.084, or 8.4%

Notice that the relative uncertainty in t (8.3%) is significantly greater than the relative uncertainty for a (1.0%), and therefore the relative uncertainty in v is essentially the same as for t (about 8%).

Graphically, the RSS is like the Pythagorean theorem:

Figure 2

The total uncertainty is the length of the hypotenuse of a right triangle with legs the length of each uncertainty component.

Timesaving approximation: "A chain is only as strong as its weakest link."

If one of the uncertainty terms is more than 3 times greater than the other terms, the root-sum-of-squares formula can be skipped, and the combined uncertainty is simply the largest uncertainty. This shortcut can save a lot of time without losing any accuracy in the estimate of the overall uncertainty.
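The relative-uncertainty rule in equation ( 34 ) is easy to script. Below is a minimal Python sketch (not part of the lab manual) that applies the RSS rule for a product to the v = at example above, and also reports the weakest-link shortcut for comparison.

```python
import math

def rss_relative(*relative_uncertainties):
    """Relative uncertainty of a product or quotient of independent factors (equations 30 and 33)."""
    return math.sqrt(sum(r ** 2 for r in relative_uncertainties))

a, sigma_a = 9.8, 0.1   # m/s^2
t, sigma_t = 1.2, 0.1   # s

v = a * t
rel_v = rss_relative(sigma_a / a, sigma_t / t)
print(f"v = {v:.2f} m/s, relative uncertainty = {rel_v:.1%}")          # about 8.4%
print(f"weakest-link shortcut: {max(sigma_a / a, sigma_t / t):.1%}")   # about 8.3%, nearly the same
```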
THE UPPER-LOWER BOUND METHOD OF UNCERTAINTY PROPAGATION

An alternative, and sometimes simpler, procedure than the tedious propagation-of-uncertainty law is the upper-lower bound method of uncertainty propagation. This alternative method does not yield a standard uncertainty estimate (with a 68% confidence interval), but it does give a reasonable estimate of the uncertainty for practically any situation. The basic idea of this method is to use the uncertainty ranges of each variable to calculate the maximum and minimum values of the function. You can also think of this procedure as examining the best-case and worst-case scenarios.

For example, suppose you measure an angle to be θ = 25° ± 1° and you need to find f = cos θ. Then:

( 35 ) fmax = cos(24°) = 0.9135
( 36 ) fmin = cos(26°) = 0.8988
( 37 ) ∴ f = 0.906 ± 0.007, where 0.007 is half the difference between fmax and fmin

Note that even though θ was only measured to 2 significant figures, f is known to 3 figures. By the propagation of uncertainty law, σf = |sin θ| σθ = (0.423)(π/180) = 0.0074 (the same result as above).

The uncertainty estimate from the upper-lower bound method is generally larger than the standard uncertainty estimate found from the propagation of uncertainty law, but both methods will give a reasonable estimate of the uncertainty in a calculated value.

The upper-lower bound method is especially useful when the functional relationship is not clear or is incomplete. One practical application is forecasting the expected range in an expense budget. In this case, some expenses may be fixed, while others may be uncertain, and the range of these uncertain terms could be used to predict the upper and lower bounds on the total expense.

SIGNIFICANT FIGURES

The number of significant figures in a value can be defined as all the digits between and including the first non-zero digit from the left and the last digit. For instance, 0.44 has two significant figures, and the number 66.770 has 5 significant figures. Zeroes are significant except when used to locate the decimal point, as in the number 0.00030, which has 2 significant figures. Zeroes may or may not be significant for numbers like 1200, where it is not clear whether two, three, or four significant figures are indicated. To avoid this ambiguity, such numbers should be expressed in scientific notation (e.g. 1.20 × 10³ clearly indicates three significant figures).

When using a calculator, the display will often show many digits, only some of which are meaningful (significant in a different sense). For example, if you want to estimate the area of a circular playing field, you might pace off the radius to be 9 meters and use the formula A = πr². When you compute this area, the calculator might report a value of 254.4690049 m². It would be extremely misleading to report this number as the area of the field, because it would suggest that you know the area to an absurd degree of precision: to within a fraction of a square millimeter! Since the radius is only known to one significant figure, the final answer should also contain only one significant figure: Area = 3 × 10² m².

From this example, we can see that the number of significant figures reported for a value implies a certain degree of precision. In fact, the number of significant figures suggests a rough estimate of the relative uncertainty:

1 significant figure suggests a relative uncertainty of about 10% to 100%
2 significant figures suggest a relative uncertainty of about 1% to 10%
3 significant figures suggest a relative uncertainty of about 0.1% to 1%

To understand this connection more clearly, consider a value with 2 significant figures, like 99, which suggests an uncertainty of ±1, or a relative uncertainty of ±1/99 = ±1%. (Actually, some people might argue that the implied uncertainty in 99 is ±0.5, since the range of values that would round to 99 is 98.5 to 99.4. But since the uncertainty here is only a rough estimate, there is not much point arguing about the factor of two.)
The smallest 2-significant-figure number, 10, also suggests an uncertainty of ±1, which in this case is a relative uncertainty of ±1/10 = ±10%. The ranges for other numbers of significant figures can be reasoned in a similar manner.

USE OF SIGNIFICANT FIGURES FOR SIMPLE PROPAGATION OF UNCERTAINTY

By following a few simple rules, significant figures can be used to find the appropriate precision for a calculated result for the four most basic math functions, all without the use of complicated formulas for propagating uncertainties.

For multiplication and division, the number of significant figures that are reliably known in a product or quotient is the same as the smallest number of significant figures in any of the original numbers.

Example: 6.6 (2 significant figures) × 7328.7 (5 significant figures) = 48369.42, which should be reported as 48 × 10³ (2 significant figures).

For addition and subtraction, the result should be rounded off to the last decimal place reported for the least precise number.

Examples: 223.64 + 54 = 277.64, reported as 278 (rounded to the ones place of the least precise number, 54); 5560.5 + 0.008 = 5560.508, reported as 5560.5 (rounded to the tenths place of the least precise number, 5560.5).

If a calculated number is to be used in further calculations, it is good practice to keep one extra digit to reduce rounding errors that may accumulate. Then the final answer should be rounded according to the above guidelines.

UNCERTAINTY, SIGNIFICANT FIGURES, AND ROUNDING

For the same reason that it is dishonest to report a result with more significant figures than are reliably known, the uncertainty value should also not be reported with excessive precision. For example, it would be unreasonable for a student to report a result like:

( 38 ) measured density = 8.93 ± 0.475328 g/cm³   WRONG!

The uncertainty in the measurement cannot possibly be known so precisely! In most experimental work, the confidence in the uncertainty estimate is not much better than about ±50% because of all the various sources of error, none of which can be known exactly. Therefore, uncertainty values should be stated to only one significant figure (or perhaps 2 significant figures if the first digit is a 1).

Because experimental uncertainties are inherently imprecise, they should be rounded to one, or at most two, significant figures.

To help give a sense of the amount of confidence that can be placed in the standard deviation, the relative uncertainty associated with the standard deviation can be estimated for various sample sizes using the approximate formula below. Note that in order for an uncertainty value to be reported to 3 significant figures, more than 10,000 readings would be required to justify this degree of precision!

*The relative uncertainty of the standard deviation is given by the approximate formula: σs/s ≈ 1/√[ 2(N − 1) ]
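To see how slowly this confidence improves with more data, here is a minimal Python sketch (illustrative only) that evaluates the approximate formula for a few sample sizes:

```python
import math

def relative_uncertainty_of_std(n):
    """Approximate relative uncertainty of the sample standard deviation for N readings."""
    return 1.0 / math.sqrt(2 * (n - 1))

for n in (5, 10, 100, 1000, 10000):
    print(f"N = {n:6d}: sigma_s / s ~ {relative_uncertainty_of_std(n):.1%}")
# N = 5 gives roughly 35%, N = 100 roughly 7%, and only near N = 10,000 does the value drop
# below 1%, which is why quoting an uncertainty to 3 significant figures is rarely justified.
```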
When an explicit uncertainty estimate is made, the uncertainty term indicates how many significant figures should be reported in the measured value (not the other way around!). For example, the uncertainty in the density measurement above is about 0.5 g/cm³, so this tells us that the digit in the tenths place is uncertain and should be the last one reported. The other digits in the hundredths place and beyond are insignificant and should not be reported:

measured density = 8.9 ± 0.5 g/cm³   RIGHT!

An experimental value should be rounded to be consistent with the magnitude of its uncertainty. This generally means that the last significant figure in any reported value should be in the same decimal place as the uncertainty.

In most instances, this practice of rounding an experimental result to be consistent with the uncertainty estimate gives the same number of significant figures as the rules discussed earlier for simple propagation of uncertainties for adding, subtracting, multiplying, and dividing.

Caution: When conducting an experiment, it is important to keep in mind that precision is expensive (both in terms of time and material resources). Do not waste your time trying to obtain a precise result when only a rough estimate is required. The cost increases rapidly with the amount of precision required, so the potential benefit of this precision must be weighed against the extra cost.

COMBINING AND REPORTING UNCERTAINTIES

In 1993, the International Organization for Standardization (ISO) published the first official worldwide Guide to the Expression of Uncertainty in Measurement. Before this time, uncertainty estimates were evaluated and reported according to different conventions depending on the context of the measurement or the scientific discipline. Here are a few key points from this 100-page guide, which can be found in modified form on the NIST website.

When reporting a measurement, the measured value should be reported along with an estimate of the total combined standard uncertainty Uc of the value. The total uncertainty is found by combining the uncertainty components based on the two types of uncertainty analysis:

Type A evaluation of standard uncertainty - a method of evaluating uncertainty by the statistical analysis of a series of observations. This method primarily accounts for random errors.

Type B evaluation of standard uncertainty - a method of evaluating uncertainty by means other than the statistical analysis of a series of observations. This method accounts for systematic errors and any other uncertainty factors that the experimenter believes are important.

The individual uncertainty components ui should be combined using the law of propagation of uncertainties, commonly called the "root-sum-of-squares" or "RSS" method. When this is done, the combined standard uncertainty should be equivalent to the standard deviation of the result, making this uncertainty value correspond to a 68% confidence interval. If a wider confidence interval is desired, the uncertainty can be multiplied by a coverage factor (usually k = 2 or 3) to provide an uncertainty range that is believed to include the true value with a confidence of 95% (for k = 2) or 99.7% (for k = 3). If a coverage factor is used, there should be a clear explanation of its meaning so there is no confusion for readers interpreting the significance of the uncertainty value.

You should be aware that the ± uncertainty notation may be used to indicate different confidence intervals, depending on the scientific discipline or context. For example, a public opinion poll may report that the results have a margin of error of ±3%, which means that readers can be 95% confident (not 68% confident) that the reported results are accurate within 3 percentage points. Similarly, a manufacturer's tolerance rating generally assumes a 95% or 99% level of confidence.
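Returning to the RSS combination and coverage factor described above, here is a minimal Python sketch of how the pieces fit together. The two component values are hypothetical, invented only for illustration; they are not taken from any measurement in this guide.

```python
import math

def combined_standard_uncertainty(components):
    """RSS combination of individual standard-uncertainty components u_i."""
    return math.sqrt(sum(u ** 2 for u in components))

# Hypothetical components: one Type A (statistical) and one Type B (e.g. from a calibration certificate).
u_type_a = 0.012   # scatter of repeated readings
u_type_b = 0.005   # estimated instrument/calibration contribution

u_c = combined_standard_uncertainty([u_type_a, u_type_b])
k = 2  # coverage factor for roughly 95% confidence
print(f"combined standard uncertainty u_c = {u_c:.3f}")   # about 0.013
print(f"expanded uncertainty (k = 2)      = {k * u_c:.3f}")  # about 0.026
```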
CONCLUSION: "WHEN DO MEASUREMENTS AGREE WITH EACH OTHER?"

We now have the resources to answer the fundamental scientific question that was asked at the beginning of this error analysis discussion: "Does my result agree with a theoretical prediction or with results from other experiments?"

Generally speaking, a measured result agrees with a theoretical prediction if the prediction lies within the range of experimental uncertainty. Similarly, if two measured values have standard uncertainty ranges that overlap, then the measurements are said to be consistent (they agree). If the uncertainty ranges do not overlap, then the measurements are said to be discrepant (they do not agree). However, you should recognize that these overlap criteria can give two opposite answers depending on the evaluation and confidence level of the uncertainty. It would be unethical to arbitrarily inflate the uncertainty range just to make a measurement agree with an expected value. A better procedure is to discuss the size of the difference between the measured and expected values within the context of the uncertainty, and to try to discover the source of the discrepancy if the difference is truly significant. To examine your own data, you are encouraged to use the Measurement Comparison tool available on the lab website.

Here are some examples using this graphical analysis tool:

Figure 3

A = 1.2 ± 0.4 and B = 1.8 ± 0.4. These measurements agree within their uncertainties, despite the fact that the percent difference between their central values is 40%.

However, with half the uncertainty (±0.2), these same measurements do not agree, since their uncertainty ranges do not overlap. Further investigation would be needed to determine the cause of the discrepancy. Perhaps the uncertainties were underestimated, there may have been a systematic error that was not considered, or there may be a true difference between these values.

Figure 4

An alternative method for determining agreement between values is to calculate the difference between the values divided by their combined standard uncertainty. This ratio gives the number of standard deviations separating the two values. If this ratio is less than 1.0, then it is reasonable to conclude that the values agree. If the ratio is more than 2.0, then it is highly unlikely (less than about a 5% probability) that the values are the same.

Example from above with u = 0.4: |1.2 − 1.8| / 0.57 = 1.1. Therefore, A and B likely agree.

Example from above with u = 0.2: |1.2 − 1.8| / 0.28 = 2.1. Therefore, it is unlikely that A and B agree.
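This agreement test is straightforward to automate. Here is a minimal Python sketch (illustrative only, not the lab's Measurement Comparison tool) applied to the two cases above:

```python
import math

def agreement_ratio(a, u_a, b, u_b):
    """Difference between two values divided by their combined standard uncertainty."""
    return abs(a - b) / math.sqrt(u_a ** 2 + u_b ** 2)

# The two cases discussed above: A = 1.2 and B = 1.8, with uncertainties of 0.4 and then 0.2.
for u in (0.4, 0.2):
    print(f"u = {u}: ratio = {agreement_ratio(1.2, u, 1.8, u):.1f}")
# Prints a ratio of about 1.1 for u = 0.4 (the values plausibly agree)
# and about 2.1 for u = 0.2 (it is unlikely that the values agree).
```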
