Appendix C: Functional Cross-Reference Transition: Fill & Download for Free

GET FORM

Download the form

How to Edit and sign Appendix C: Functional Cross-Reference Transition Online

Read the following instructions to use CocoDoc to start editing and finalizing your Appendix C: Functional Cross-Reference Transition:

  • To begin with, look for the “Get Form” button and click it.
  • Wait until Appendix C: Functional Cross-Reference Transition appears.
  • Customize your document using the toolbar at the top.
  • Download your customized form and share it as needed.

How to Edit Your PDF Appendix C: Functional Cross-Reference Transition Online

Editing your form online is quite effortless. You don't need to install any software on your computer or phone to use this feature. CocoDoc offers an easy tool to edit your document directly in any web browser. The entire interface is well organized.

Follow the step-by-step guide below to edit your PDF files online:

  • Open the official CocoDoc website on the device where your file is stored.
  • Find the ‘Edit PDF Online’ option and click it.
  • On the tool page, drag and drop your file, or add it through the ‘Choose File’ option.
  • Once the document is uploaded, edit it using the toolbar as needed.
  • When the modification is done, click the ‘Download’ icon to save the file.

How to Edit Appendix C: Functional Cross-Reference Transition on Windows

Windows is the most widespread operating system. However, Windows does not include a default application that can directly edit PDF templates. In this case, you can get CocoDoc's desktop software for Windows, which helps you work on documents productively.

All you have to do is follow the guidelines below:

  • Get the CocoDoc software from the Windows Store.
  • Open the software, then drag and drop your PDF document into it.
  • You can also drag and drop the PDF file from OneDrive.
  • Edit the document as needed using the tools at the top.
  • Once done, save the customized form to your computer. You can also find more details about how to edit a PDF on this page.

How to Edit Appendix C: Functional Cross-Reference Transition on Mac

macOS comes with a default application, Preview, for opening PDF files. Although Mac users can view PDF files and even mark up text in them, Preview does not support full editing. By using CocoDoc, you can edit your document on a Mac easily.

Follow the effortless instructions below to start editing:

  • First, install the CocoDoc desktop app on your Mac.
  • Then, drag and drop your PDF file into the app.
  • You can also attach the template from cloud storage such as Dropbox, Google Drive, or OneDrive.
  • Edit, fill, and sign your document using the available tools.
  • Lastly, download the file to save it on your device.

How to Edit PDF Appendix C: Functional Cross-Reference Transition via G Suite

G Suite is Google's widely used suite of intelligent apps, designed to make your work faster and to improve collaboration. Integrating CocoDoc's PDF document editor with G Suite helps you accomplish work more effectively.

Here are the guidelines to do it:

  • Open the Google Workspace Marketplace on your computer.
  • Search for CocoDoc PDF Editor and install the add-on.
  • Select the template you want to edit and open it with CocoDoc PDF Editor by clicking "Open with" in Drive.
  • Edit and sign your document using the toolbar.
  • Save the customized PDF file to your computer.

PDF Editor FAQ

Did General Grant and General Lee ride the same breed of horse?

No. As mentioned in other answers, General Grant rode Cincinnati, a Thoroughbred, and General Lee rode Traveller, an American Saddlebred.

However, it is important to note that both of these breeds have changed drastically since they were used by the U.S. Cavalry and the Confederate Cavalry during the Civil War (1861 - 1865). The breeds’ modern counterparts are quite different.

While both Cincinnati and Traveller carried the blood of “the first great Thoroughbred stallion bred in America, sire of the fastest horses”, Sir Archy (by Diomed, c. 1814-1831) - as most American Thoroughbreds did during that time period - the two horses were quite different in appearance, build, and movement.

First, however, let’s compare Cincinnati and Traveller as mounts.

Above: Cincinnati, General Grant’s preferred mount. He was the son of Lexington, the fastest four-mile Thoroughbred in the United States (time 7:19.75 minutes). Cincinnati was also the grandson of the great Boston, who sired Lexington; Boston himself was sired by Timoleon, the son of Sir Archy.

Above: Cincinnati’s sire, the Thoroughbred racehorse Lexington.

Foaled in Kentucky, Lexington was bred and owned by Dr. Elisha Warfield. The colt was technically under lease to his trainer, Henry Brown, for his first two starts but raced in Dr. Warfield's colors as Brown, an African American ex-slave, could not enter horses at Lexington's Kentucky Association Race Track in his own right.

[…] Lexington was arguably the best racehorse of his day, but his prowess as a racer was as nothing compared to his impact as a sire.
His career straddled both the American Civil War and the transition between the highly stamina-oriented heat racing commonplace before the war, and the much more speed-oriented ‘dash’ racing that became the norm afterward.

Lexington's progeny proved ideally suited to the new realities of racing, and Lexington became the most dominant sire ever seen in North America, leading the general sire list 16 times.

While no records were kept of broodmare sire statistics until much later, Lexington was equally valuable as a broodmare sire, with the result that by the end of the 19th century, a heavy majority of Thoroughbreds produced in North America carried his blood, often through multiple crosses. His male line is now extinct, but his influence lives on in pedigrees around the world.

A bay horse, Lexington stood 15.3 hands, and was considered extremely well-conformed, with powerful shoulders and hindquarters…

Lexington was inbred 3 × 4 to the stallion Sir Archy, and 4 × 4 to the stallion Diomed, meaning that both horses appear twice in his pedigree—Sir Archy in the third and fourth generations, and Diomed twice in the fourth generation. (Source)

He suffered from defective vision (his sire, Boston, had gone blind) and was losing his eyesight by the time he retired from racing as a 5-year-old; he became completely blind during his stud career. His head had a distinctive shape with a slightly bulging forehead, a slight dish at mid-profile and a convex nose, and he regularly transmitted this “moose head” profile to his descendants.

During his stallion career, Lexington became known as the “Blind Hero of Woodburn” because of his prowess as a sire and his blindness.

During his stud career, his disposition was said to rival his sire Boston's for willfulness and viciousness; as his dam Alice Carneal was also said to be quite high-strung, he was not bred for a placid temperament.

Lexington led the American general sire list in 1861-1874, 1876 and 1878.
His daughters were highly prized as broodmares, and formed effective crosses with the important sires Leamington (a four-time American champion sire) and Australian. Lexington's record as a sire of winners is all the more remarkable when it is considered that many of his offspring never raced, due to the disruptions caused by the American Civil War.

During a February 1865 raid on Woodburn Farm, soldiers seized 15 of Woodburn's horses [for the Cavalry]…Lexington himself was not touched, being 15 years old and completely blind by that time. After that incident, Woodburn sent the bulk of his breeding stock - Lexington included - to safety in Illinois until the conclusion of the Civil War. (Source)

Lexington's dominance in the pedigrees of American-bred Thoroughbreds, and the fact that British Thoroughbred breeders considered him not a purebred, was a large factor in the so-called Jersey Act of 1913, in which the British Jockey Club limited the registration of horses not traced completely to horses in the General Stud Book. (Source)

The loss of breeding records during the American Civil War, and the late beginning of the registration of American Thoroughbreds, led many in the British racing establishment to doubt that the American-bred horses were purebred…[the British prohibited] the registration of horses in the [stud] book, unless all of their ancestors had been registered, [and had records to prove it].

[This was also compounded by the massive import of, and crossbreeding to, unregistered or undocumented Canadian horses of non-Thoroughbred descent for the Union Cavalry during the Civil War.]

Descended from draft and light riding horses imported to Canada in the late 1600s, [the Canadian horse] was later crossed with other British and American breeds.
During the 18th century (1700s), the Canadian horse spread throughout the northeastern US, where it contributed to the development of several horse breeds, [but had no established stud book].

[…] Thousands of horses were exported [from Canada to the US] in the 19th century, many of whom were subsequently killed while acting as cavalry horses in the American Civil War. These exports decreased the purebred Canadian [horse] population almost to the point of extinction, prompting the formation of a stud book, and the passage of a law against further export [to the US]. (Source)

Despite protests from American breeders, the regulation for [English Thoroughbred breeding] remained in force until 1949…Weatherby's further amended its regulations in 1969, introducing the word "thoroughbred" to describe the horses registered in previous volumes of the General Stud Book. (Source)

Cincinnati was a gift to Grant from an admirer during the War, and - unlike his sire - had a “placid” manner, and “seldom batted an eyelid at anything”.[1] The horse was large at 17 hands (68 inches, 173 cm), handsome, and powerful, and he quickly became Grant's favorite. When Grant rode Cincinnati to negotiate Robert E. Lee's surrender at Appomattox Court House, the animal became immortalized.[2]

“The name excited my father’s curiosity and he called at the hotel to meet the gentleman who told him that he had, he thought, the finest horse in the world, and knowing General Grant’s great liking for horses he had concluded, inasmuch as he would never be able to ride again, that he would like to give his horse to him; that he desired that the horse should have a good home and tender care and that the only condition that he would make in parting with him would be that the person receiving him would see that he was never ill-treated and should never fall into the hands of a person that would ill-treat him.
This promise was given and General Grant accepted the horse, and called him ‘Cincinnati.’”

Of Cincinnati, Ulysses Grant’s cypher operator Captain Samuel H. Beckwith said: “It was not, however, until the Wilderness Campaign, that ‘Cincinnati’ became filled with the martial spirit and frantic to participate in the turmoil of battle. In quietude this famous animal seemed gentle and spiritless, but the battle sounds stirred him with enthusiasm. No artist could paint the beauty of this horse in the midst of action, when the curb was required to hold him back.”

Based on photos, General Grant also used an English-style leather stockman's or hunting breastplate when riding Cincinnati, as well as a crupper, to keep his saddle secure. This shows a higher regard for safety and security measures on Grant’s part, especially in case his saddle slipped forwards or backwards on Cincinnati while riding.

General Grant also used a McClellan-style saddle and a double bridle. The saddle blanket is also notably shaped like an eagle’s wings, similar to the Bald Eagle emblem stitched to it, the national symbol of the United States (Union). Overall, Grant’s tack shows a high degree of formality in appearance and equipment, as well as more stringent control and safety precautions.

Speaking of McClellan, another Union officer, the McClellan saddle reflected the American (Union) fascination at the time with European cavalry practices and horse breeds.

“In April 1855, six years before the start of the Civil War, Captain George B. McClellan sailed to Europe as part of a military commission to study developments in European tactics, weaponry, and logistics.

McClellan's focus was the organization of engineer troops and cavalry. After the one-year tour, during which time McClellan observed several battles of the Crimean War, McClellan brought back almost 100 books and manuals.
These he read before writing his report, which concluded with his proposed manual for American cavalry adapted from existing Russian cavalry regulations.

He also proposed a cavalry saddle that he claimed was a modification of a Hungarian model used in the Prussian service. The saddle was almost certainly a modification of the Spanish tree saddle in common use in Mexico during this period, and which had become common in some parts of the US.

The McClellan saddle was adopted by the US War Department in 1859 and remained standard issue, in various models, for the remaining history of the horse cavalry.” (Source)

The Thoroughbred was developed in England during the 17th and 18th centuries, and experienced high popularity in the States at the time as well. The first Thoroughbred had been imported in 1730, and the breed would later gain recognition in 1868.

Many casts and statues of Grant depict him riding Cincinnati - his Thoroughbred - and, in fact, very few others were allowed to even mount the famous horse. The only two people ever to ride Cincinnati other than Grant himself were a boyhood friend, Admiral Daniel Ammen, and President Abraham Lincoln.[3]

The account of President Lincoln riding Cincinnati is as follows:

“General Grant presented to Mr. Lincoln the officers of the staff who were present, and he had for each one a cordial greeting and a pleasant word. There was a kindliness in his tone and a hearty manner of expression which went far to captivate all who met him. The President soon stepped ashore, and after sitting awhile at headquarters mounted the large bay horse, Cincinnati, while the general rode with him on [his horse] Jeff Davis.

Three of us of the staff accompanied them, and the scenes encountered in visiting both Butler’s and Meade’s commands were most interesting. Mr. Lincoln wore a very high black silk hat and black trousers and frockcoat.
Like most men who had been brought up in the West, he had good command of a horse, but it must be acknowledged that, in appearance, he was not a very dashing rider.

On this occasion, by the time he had reached the troops he was completely covered with dust, and the black color of his clothes had changed to Confederate gray. As he had no straps, his trousers gradually worked up above his ankles, and gave him the appearance of a country farmer riding into town wearing his Sunday clothes.

A citizen on horseback is always an odd sight in the midst of a uniformed army, and the picture presented by the President bordered upon the grotesque. However, the troops were so lost in admiration of the man that the humorous aspect did not seem to strike them. The soldiers rapidly passed the word along the line that Uncle Abe had joined them, and cheers broke forth from all the commands, and enthusiastic shouts and even words of familiar greeting met him on all sides.”[4]

Above: Traveller, General Lee’s preferred mount. He was a grey American Saddlebred (technically a Thoroughbred-Saddlebred cross) of 16 hands (64 inches, 163 cm), notable for speed, strength and courage in combat.

Here, we also see General Lee using a traditional military hold on the reins, possibly for neck reining, as well as a single curb bit with a Western-style (“working style”) bridle. The reins are kept relaxed and somewhat loose; as with working horses, a relaxed rein allows the animal freedom to move over rough terrain. There is some slack in the reins, unless the rider needs to tell the horse to stop.

Lee’s tack and appearance are far more casual and laid-back than those of General Grant. There is also less focus on tighter safety precautions, later to General Lee’s detriment.

“The traditional cavalry hold has the rider place all four reins in the left hand; in this way, the right hand is left free to hold a sword, lance, or other weapon.
The precise order of reins has varied from era to era, from country to country, and to suit specific circumstances of battle or pageantry. In every case, a great deal of precise control is needed to selectively engage the bradoon or curb independently.

Less schooled troopers may ride while engaging only a single pair of reins for one bit, and allowing the reins for the second bit to bow and thereby apply only the passive effect of gravity on that bit…It demonstrates the horse's throughness, self-carriage, and obedience due to the fact that the rider has little control with the reins except to create flexion. When used, it can increase the difficulty of the movement.” (Source)

Traveller, sired by notable racehorse Grey Eagle - a 16hh (hands high) Thoroughbred - and originally named Jeff Davis, was born to [American Saddlebred broodmare] Flora in 1857 near the Blue Sulphur Springs, in Greenbrier County, Virginia (now West Virginia) and was first owned and raised by James W. Johnston.

Grey Eagle sired numerous successful racehorses and saddle horses, with “Saddlebred” referring to “saddle” horses, versus “Thoroughbred” for racehorses.

Above: Traveller’s sire, the Thoroughbred stallion Grey Eagle.

[Grey Eagle] was romantically beautiful, extremely fast, and the pride of Kentucky. Foaled in 1835, Grey Eagle helped establish Kentucky’s reputation as a premier source of great horses, but his influence on the American Thoroughbred went far beyond racing.

[…] Grey Eagle was sired by Woodpecker, arguably the first top-flight racehorse foaled in Kentucky. His dam was the gray mare Ophelia, by Wheild Medley, out of a Sir Archy mare, and being a product of his times, Grey Eagle was inbred 3x3 to Sir Archy.

[…] At the Oakland fall meet, Grey Eagle captured the patrons’ attention when setting an American record for two miles in a sweepstakes, going in 3:41 in the first heat, and cantering home in the second heat in 3:43 1/2.
A week later he won another stakes at the same track and distance.

Grey Eagle had grown into a strikingly handsome horse, a light gray with a silver mane and tail, of majestic carriage, graceful step, and the effortless stroke of a true four-mile horse. When his handlers uncovered him at the track, ladies were said to have swooned.

[…] He went to stud in 1840 and stood at various farms around Kentucky until 1856, when he was sold to owners in Ohio. He was the second-leading sire in 1850 and 1851 by number of winners. Many of his daughters became excellent producers. His name appears often in American families (Zenyatta, Mizzen Mast, and Runaway Groom trace back to a daughter) and he spread his gray color far and wide.

Several of his sons, such as Flying Dutchman and Bay Printer, made contributions to the Quarter Horse, and his speed was used in the formative years of the Standardbred. (Source)

As a colt, Traveller took the first prize at the Lewisburg, Virginia fairs in 1859 and 1860. As an adult he was a sturdy horse of 1,100 pounds (500 kg); was an iron gray in color, with black point coloration; and had a long mane and flowing tail.

In the spring of 1861, a year before achieving fame as a Confederate general, Robert E. Lee was commanding a small force in western Virginia. The quartermaster of the 3rd Regiment, Wise Legion, Captain Joseph M. Broun, was directed to "purchase a good serviceable horse of the best Greenbrier stock for our use during the war”.

Broun purchased the horse for $175 (approximately $4,545 in 2008) from Andrew Johnston's son, Captain James W. Johnston, and named him Greenbrier. Major Thomas L. Broun, Joseph's brother, recalled that Greenbrier:

“...was greatly admired in camp for his rapid, springy walk, his high spirit, bold carriage, and muscular strength.
He needed neither whip nor spur, and would walk his five or six miles an hour over the rough mountain roads of Western Virginia with his rider sitting firmly in the saddle and holding him in check by a tight rein, such vim and eagerness did he manifest to go right ahead so soon as he was mounted.”

— Major Thomas L. Broun

General Lee took a great fancy to the horse. He called him his "colt", and predicted to Broun that he would use it before the war was over. After Lee was transferred to South Carolina, Joseph Broun sold the horse to him for $200 in February 1862. Lee named the horse "Traveller".

Lee described his horse in a letter in response to his wife's cousin, Markie Williams, who wished to paint a portrait of Traveller:

“If I was an artist like you, I would draw a true picture of Traveller; representing his fine proportions, muscular figure, deep chest, short back, strong haunches, flat legs, small head, broad forehead, delicate ears, quick eye, small feet, and black mane and tail. Such a picture would inspire a poet, whose genius could then depict his worth, and describe his endurance of toil, hunger, thirst, heat and cold; and the dangers and suffering through which he has passed. He could dilate upon his sagacity and affection, and his invariable response to every wish of his rider. He might even imagine his thoughts through the long night-marches and days of the battle through which he has passed. But I am no artist Markie, and can therefore only say he is a Confederate gray.”

— Robert E. Lee, letter to Markie Williams

Traveller was a horse of great stamina, and was usually a good horse for an officer in battle, because he was difficult to frighten. He could sometimes become nervous and spirited, however.
This nervousness, together with Lee’s more “laid-back” style and his lesser attention to safety than Grant’s, would later cause Lee to be injured.

At the Second Battle of Bull Run, while General Lee was at the front reconnoitering, dismounted and holding Traveller by the bridle, the horse became frightened at some movement of the enemy and, plunging, pulled Lee down on a stump, breaking both of his hands. Lee went through the remainder of that campaign chiefly in an ambulance. When he rode on horseback, a courier rode in front, leading his horse.[5]

Another source also recounts the story as such:

“Despite the strong association Lee shared with Traveller, he did not begin to regularly ride him until after the spring 1862 Peninsula campaign.

From that point on, he was the general’s most-used mount, even after Traveller reared and threw his owner shortly after the Second Battle of Manassas. His hands badly damaged in the fall, Lee was unable to mount up again until the day of the Battle of Sharpsburg.

Some of the most dramatic incidents involving Lee and Traveller occurred during the Overland campaign in 1864, when soldiers literally grabbed the horse’s reins to prevent their commander from personally leading attacks on six occasions between May 6 and May 12.” (Source: HistoryNet)

Now, when it comes to examining the horses’ conformation - or, how they are bred and built - here is a photo of General Grant’s favorite three horses, from left to right: Egypt, Cincinnati, and Jeff Davis.

As you can tell, out of the three, Cincinnati is the one that looks the “prettiest”. The other two - Egypt and Jeff Davis - have heavier heads and thicker builds, more like those you’d find in today’s American Standardbred - the harness-racer, called a “roadster” in those days - than your typical Thoroughbred, or flat-racer. The Standardbred also first began to emerge as a breed during and after the Civil War era (c. 1851 - 1875).

This is because many Cavalry bloodlines at the time were greatly influenced by the previous standard American breed, the smaller and stockier Narragansett Pacer (c. early 1700s), as well as the imported Norfolk Trotter from England (c. 1822), albeit later crossed with Thoroughbred blood for greater size, stamina, grace, and speed.

"They have, besides, a breed of small horses, which are extremely hardy. They pace naturally, though in no very graceful or easy manner; but with such swiftness, and for so long a continuance, as must appear almost incredible to those who have not experienced it…" - Edmund Burke on the Narragansett Pacer, c. 1757

[...] Unlike a racehorse bred to produce quick, bursting speed over a flat course, the Narragansett Pacer was a relatively small horse, but bred and trained to move swiftly over rough terrain with tremendous endurance. As a pacer, it had a somewhat awkward high step, but it did not sway from side to side, and could carry a man 50 miles or more in a day.

[…] The rider could spend hours in the saddle, even all day, and often did. Before roads were built, overland transportation consisted of following rough trails, pathways, and Indian traces. "Carriages were unknown," wrote one chronicler of 18th-century life in southern Rhode Island.
"And the public roads were not so good...all the riding was done on horseback."

The Narragansett Pacer had been used by the likes of Paul Revere and General George Washington as prized mounts by the Patriots during the American Revolutionary War, which I wrote about more in-depth on Reddit here.

Likewise, Cincinnati’s hindquarters are built more like those of a classic American Quarter Horse - which the Thoroughbred “quarter-mile”, or sprint, racers served as a foundation for - while also retaining the build and structure of an ideal Thoroughbred for hunter-jumper sports, as opposed to flat-racing.

Above: A 1905 photo of “Thoroughbred” Peter McCue, a foundational sire of the American Quarter Horse. He was foaled in 1895, and registered as a Thoroughbred, but is of disputed pedigree.

At the time, due to how much more common crossbreeding was, Thoroughbreds also had more variation in type. As the Thoroughbred breed became established in America, many colonial non-Thoroughbred horses - primarily native mares crossed with Thoroughbred stallions - were included in the original American Thoroughbred stud books. These horses were used as work horses during the week, and race horses on the weekends, reflecting the American attitude and work ethic.

Today, the modern American Thoroughbred, which is largely used for racing only, has been purposefully bred to have a heart up to twice as large[6] and lighter bones[7], which have caused health issues in the breed, including heart problems, such as aortic and arrhythmia issues, and sudden cardiac death[8][9]. This is in contrast to the Civil War era, when horses were bred not just for speed, but also for endurance, including the stresses and demands of everyday riding.

During and after the Civil War era, there was a shift in Thoroughbred racing from endurance to speed, resulting in the changes to the breed seen today.
According to Bill Finley’s 2005 article for ESPN, “What’s wrong with the modern Thoroughbred”:

“The brittleness of the modern horse is a problem that is getting worse all the time, and one for which racing seems to have no solutions. It's a reason why so many tracks have so many small fields that are unappetizing to bettors, and a reason why racing is losing its appeal with the general public.

The sport has no stars anymore, because a true star has to have lasting appeal…It is widely assumed that horses, especially good horses, can't race more than five or six times a year, and routinely need six or seven weeks between races, because the breed is so fragile these days it cannot stand anything more punishing…

The theory dismisses the chance that there is the possibility of a chicken-and-egg scenario here. Which came first–wimpy horses, or wimpy horse trainers? Did horses hold up much better 20, 30, 40 years ago because the breed was, in general, tougher and more durable or because trainers made them tougher and more durable? […] Turn back the clock to the forties, and horses were machines…

[…] I don't know the first thing about training, but something tells me that a horse has to be mentally and physically battle-tested in order to hold up…This requires a high degree of fitness and the type of experience that can only come from racing [nowadays, as horses are no longer used in war]…”

In 2016, veterinarian Dr. Sophy Jesty, who specializes in large animal cardiology, had this to say on the sudden cardiac arrest and death of modern Thoroughbred racehorse Homeboykris:

Q: “When Homeboykris collapsed and died of a heart attack on this year's Preakness undercard, a lot of people worried that he had the heart attack because he had had a long race career. Is there any reason these two things should be related?”

Jesty: “Homeboykris was nine years old, which is certainly old for a racehorse.
Although there are no studies to my knowledge addressing the question of how age affects the risk of sudden death in horses, theoretically, advancing age could increase the risk. At some point, the heart and lungs no longer work as strongly and efficiently as when the horse was younger. As animals age, their arteries lose elasticity, which could increase the risk for aortic rupture during exercise. However, nine years old is not geriatric for a horse, and in many sport disciplines, nine years old would only be considered mid-career.” (Source)

The results of a 2013 DNA study of modern English Thoroughbreds - and various other breeds influenced by the Thoroughbred - would also confirm much lower genetic variation in the breed than in previous decades, due to the increasing impact of centuries of selective breeding, inbreeding, and line-breeding.

“‘The results confirmed what we had previously suspected - that the Y chromosomes of modern breeds of horse show far less variability than those of other domestic animals.’…

The senior author on the paper, Dr. Gottfried Brem, said: ‘Most modern breeds were established in the last two centuries, during which time the horse has undergone a transition from working and military use towards leisure and sports…The restricted genetic diversity of the modern horse Y chromosome is a reflection of what has survived the species' dynamic history.’”

Today, the Appendix [Quarter] Horse - a half-Quarter Horse/half-Thoroughbred cross - would probably be most similar to Cincinnati.
The Appendix also more closely resembles the mixed-breed status of the American Thoroughbred at the time of the Civil War.

Above: A modern 17-hh bay Appendix [Quarter] Horse of Cincinnati’s color and build.

Likewise, Traveller, General Lee’s mount, would be more similar to today’s Missouri Fox Trotter (below) than your typical American Saddlebred, the latter of which is now bred more for flashy looks and shows than practical function.

The Fox Trotter, however, is the descendant of the more “practical-use” type of classical Saddlebred, a breed that, like the Thoroughbred, changed with the transition from “working and military” use to “leisure and sports”. New technology and the mechanization of the military over time, gradually phasing out the Cavalry, was also a major factor.

Above: A modern gray Missouri Fox Trotter of Traveller’s color and build.

The Missouri Fox Trotter was developed from equine stock, including gaited horses, brought to Missouri by settlers from Tennessee, Kentucky and Virginia. Breeds that contributed to the Fox Trotter included the Arabian, Morgan, American Saddlebred, Tennessee Walking Horse and Standardbred.

By the time of Missouri's statehood in 1821, the horses of the state were known for their unique gait, which was useful in the rocky terrain of the Ozark Mountains. The breed became popular with cattlemen for their smooth gaits and ability to work with cattle.

[…] Missouri Fox Trotters are used extensively by trail riders, who appreciate their gaits, stamina and weight-carrying abilities. They are also used in handicapped riding programs, and their smooth gait has proven useful for riders with minor physical disabilities.
Crosses between Fox Trotter mares and donkey jacks are often made, creating mules with the fox trot gait that are used to carry hunters and trail riders, especially in the western United States.

The US Forest Service also employs Fox Trotters for their speed, stamina and gait, and members of the breed were used to make the first horseback descent of the north rim of the Grand Canyon.[10]

What is the history of the C programming language?

Click the link or see the content given below to learn C history. (To get more on C, be connected to this blog.)

The Development of the C Language
Dennis M. Ritchie
Bell Labs/Lucent Technologies
Murray Hill, NJ 07974 USA

ABSTRACT

The C programming language was devised in the early 1970s as a system implementation language for the nascent Unix operating system. Derived from the typeless language BCPL, it evolved a type structure; created on a tiny machine as a tool to improve a meager programming environment, it has become one of the dominant languages of today. This paper studies its evolution.

Introduction

This paper is about the development of the C programming language, the influences on it, and the conditions under which it was created. For the sake of brevity, I omit full descriptions of C itself, its parent B [Johnson 73] and its grandparent BCPL [Richards 79], and instead concentrate on characteristic elements of each language and how they evolved.

C came into being in the years 1969-1973, in parallel with the early development of the Unix operating system; the most creative period occurred during 1972. Another spate of changes peaked between 1977 and 1979, when portability of the Unix system was being demonstrated. In the middle of this second period, the first widely available description of the language appeared: The C Programming Language, often called the `white book' or `K&R' [Kernighan 78]. Finally, in the middle 1980s, the language was officially standardized by the ANSI X3J11 committee, which made further changes. Until the early 1980s, although compilers existed for a variety of machine architectures and operating systems, the language was almost exclusively associated with Unix; more recently, its use has spread much more widely, and today it is among the languages most commonly used throughout the computer industry.

History: the setting

The late 1960s were a turbulent era for computer systems research at Bell Telephone Laboratories [Ritchie 78] [Ritchie 84].
The company was pulling out of the Multics project [Organick 75], which had started as a joint venture of MIT, General Electric, and Bell Labs; by 1969, Bell Labs management, and even the researchers, came to believe that the promises of Multics could be fulfilled only too late and too expensively. Even before the GE-645 Multics machine was removed from the premises, an informal group, led primarily by Ken Thompson, had begun investigating alternatives.

Thompson wanted to create a comfortable computing environment constructed according to his own design, using whatever means were available. His plans, it is evident in retrospect, incorporated many of the innovative aspects of Multics, including an explicit notion of a process as a locus of control, a tree-structured file system, a command interpreter as user-level program, simple representation of text files, and generalized access to devices. They excluded others, such as unified access to memory and to files. At the start, moreover, he and the rest of us deferred another pioneering (though not original) element of Multics, namely writing almost exclusively in a higher-level language. PL/I, the implementation language of Multics, was not much to our tastes, but we were also using other languages, including BCPL, and we regretted losing the advantages of writing programs in a language above the level of assembler, such as ease of writing and clarity of understanding. At the time we did not put much weight on portability; interest in this arose later.

Thompson was faced with a hardware environment cramped and spartan even for the time: the DEC PDP-7 on which he started in 1968 was a machine with 8K 18-bit words of memory and no software useful to him. While wanting to use a higher-level language, he wrote the original Unix system in PDP-7 assembler. At the start, he did not even program on the PDP-7 itself, but instead used a set of macros for the GEMAP assembler on a GE-635 machine.
A postprocessor generated a paper tape readable by the PDP-7.

These tapes were carried from the GE machine to the PDP-7 for testing until a primitive Unix kernel, an editor, an assembler, a simple shell (command interpreter), and a few utilities (like the Unix rm, cat, cp commands) were completed. After this point, the operating system was self-supporting: programs could be written and tested without resort to paper tape, and development continued on the PDP-7 itself.

Thompson's PDP-7 assembler outdid even DEC's in simplicity; it evaluated expressions and emitted the corresponding bits. There were no libraries, no loader or link editor: the entire source of a program was presented to the assembler, and the output file—with a fixed name—that emerged was directly executable. (This name, a.out, explains a bit of Unix etymology; it is the output of the assembler. Even after the system gained a linker and a means of specifying another name explicitly, it was retained as the default executable result of a compilation.)

Not long after Unix first ran on the PDP-7, in 1969, Doug McIlroy created the new system's first higher-level language: an implementation of McClure's TMG [McClure 65]. TMG is a language for writing compilers (more generally, TransMoGrifiers) in a top-down, recursive-descent style that combines context-free syntax notation with procedural elements. McIlroy and Bob Morris had used TMG to write the early PL/I compiler for Multics.

Challenged by McIlroy's feat in reproducing TMG, Thompson decided that Unix—possibly it had not even been named yet—needed a system programming language. After a rapidly scuttled attempt at Fortran, he created instead a language of his own, which he called B. B can be thought of as C without types; more accurately, it is BCPL squeezed into 8K bytes of memory and filtered through Thompson's brain.
Its name most probably represents a contraction of BCPL, though an alternate theory holds that it derives from Bon [Thompson 69], an unrelated language created by Thompson during the Multics days. Bon in turn was named either after his wife Bonnie, or (according to an encyclopedia quotation in its manual), after a religion whose rituals involve the murmuring of magic formulas.

Origins: the languages

BCPL was designed by Martin Richards in the mid-1960s while he was visiting MIT, and was used during the early 1970s for several interesting projects, among them the OS6 operating system at Oxford [Stoy 72], and parts of the seminal Alto work at Xerox PARC [Thacker 79]. We became familiar with it because the MIT CTSS system [Corbato 62] on which Richards worked was used for Multics development. The original BCPL compiler was transported both to Multics and to the GE-635 GECOS system by Rudd Canaday and others at Bell Labs [Canaday 69]; during the final throes of Multics's life at Bell Labs and immediately after, it was the language of choice among the group of people who would later become involved with Unix.

BCPL, B, and C all fit firmly in the traditional procedural family typified by Fortran and Algol 60. They are particularly oriented towards system programming, are small and compactly described, and are amenable to translation by simple compilers. They are `close to the machine' in that the abstractions they introduce are readily grounded in the concrete data types and operations supplied by conventional computers, and they rely on library routines for input-output and other interactions with an operating system. With less success, they also use library procedures to specify interesting control constructs such as coroutines and procedure closures. At the same time, their abstractions lie at a sufficiently high level that, with care, portability between machines can be achieved.

BCPL, B and C differ syntactically in many details, but broadly they are similar.
Programs consist of a sequence of global declarations and function (procedure) declarations. Procedures can be nested in BCPL, but may not refer to non-static objects defined in containing procedures. B and C avoid this restriction by imposing a more severe one: no nested procedures at all. Each of the languages (except for earliest versions of B) recognizes separate compilation, and provides a means for including text from named files.

Several syntactic and lexical mechanisms of BCPL are more elegant and regular than those of B and C. For example, BCPL's procedure and data declarations have a more uniform structure, and it supplies a more complete set of looping constructs. Although BCPL programs are notionally supplied from an undelimited stream of characters, clever rules allow most semicolons to be elided after statements that end on a line boundary. B and C omit this convenience, and end most statements with semicolons. In spite of the differences, most of the statements and operators of BCPL map directly into corresponding B and C.

Some of the structural differences between BCPL and B stemmed from limitations on intermediate memory. For example, BCPL declarations may take the form

    let P1 be command
    and P2 be command
    and P3 be command
    ...

where the program text represented by the commands contains whole procedures. The subdeclarations connected by `and' occur simultaneously, so the name P3 is known inside procedure P1. Similarly, BCPL can package a group of declarations and statements into an expression that yields a value, for example

    E1 := valof ( declarations ; commands ; resultis E2 ) + 1

The BCPL compiler readily handled such constructs by storing and analyzing a parsed representation of the entire program in memory before producing output.
Storage limitations on the B compiler demanded a one-pass technique in which output was generated as soon as possible, and the syntactic redesign that made this possible was carried forward into C.

Certain less pleasant aspects of BCPL owed to its own technological problems and were consciously avoided in the design of B. For example, BCPL uses a `global vector' mechanism for communicating between separately compiled programs. In this scheme, the programmer explicitly associates the name of each externally visible procedure and data object with a numeric offset in the global vector; the linkage is accomplished in the compiled code by using these numeric offsets. B evaded this inconvenience initially by insisting that the entire program be presented all at once to the compiler. Later implementations of B, and all those of C, use a conventional linker to resolve external names occurring in files compiled separately, instead of placing the burden of assigning offsets on the programmer.

Other fiddles in the transition from BCPL to B were introduced as a matter of taste, and some remain controversial, for example the decision to use the single character = for assignment instead of :=. Similarly, B uses /* */ to enclose comments, where BCPL uses // to ignore text up to the end of the line. The legacy of PL/I is evident here. (C++ has resurrected the BCPL comment convention.) Fortran influenced the syntax of declarations: B declarations begin with a specifier like auto or static, followed by a list of names, and C not only followed this style but ornamented it by placing its type keywords at the start of declarations.

Not every difference between the BCPL language documented in Richards's book [Richards 79] and B was deliberate; we started from an earlier version of BCPL [Richards 67].
For example, the endcase that escapes from a BCPL switchon statement was not present in the language when we learned it in the 1960s, and so the overloading of the break keyword to escape from the B and C switch statement owes to divergent evolution rather than conscious change.

In contrast to the pervasive syntax variation that occurred during the creation of B, the core semantic content of BCPL—its type structure and expression evaluation rules—remained intact. Both languages are typeless, or rather have a single data type, the `word,' or `cell,' a fixed-length bit pattern. Memory in these languages consists of a linear array of such cells, and the meaning of the contents of a cell depends on the operation applied. The + operator, for example, simply adds its operands using the machine's integer add instruction, and the other arithmetic operations are equally unconscious of the actual meaning of their operands. Because memory is a linear array, it is possible to interpret the value in a cell as an index in this array, and BCPL supplies an operator for this purpose. In the original language it was spelled rv, and later !, while B uses the unary *. Thus, if p is a cell containing the index of (or address of, or pointer to) another cell, *p refers to the contents of the pointed-to cell, either as a value in an expression or as the target of an assignment.

Because pointers in BCPL and B are merely integer indices in the memory array, arithmetic on them is meaningful: if p is the address of a cell, then p+1 is the address of the next cell. This convention is the basis for the semantics of arrays in both languages. When in BCPL one writes

    let V = vec 10

or in B,

    auto V[10];

the effect is the same: a cell named V is allocated, then another group of 10 contiguous cells is set aside, and the memory index of the first of these is placed into V. By a general rule, in B the expression

    *(V+i)

adds V and i, and refers to the i-th location after V.
BCPL and B each add special notation to sweeten such array accesses; in B an equivalent expression is

    V[i]

and in BCPL

    V!i

This approach to arrays was unusual even at the time; C would later assimilate it in an even less conventional way.

None of BCPL, B, or C supports character data strongly in the language; each treats strings much like vectors of integers and supplements general rules by a few conventions. In both BCPL and B a string literal denotes the address of a static area initialized with the characters of the string, packed into cells. In BCPL, the first packed byte contains the number of characters in the string; in B, there is no count and strings are terminated by a special character, which B spelled `*e'. This change was made partially to avoid the limitation on the length of a string caused by holding the count in an 8- or 9-bit slot, and partly because maintaining the count seemed, in our experience, less convenient than using a terminator.

Individual characters in a BCPL string were usually manipulated by spreading the string out into another array, one character per cell, and then repacking it later; B provided corresponding routines, but people more often used other library functions that accessed or replaced individual characters in a string.

More History

After the TMG version of B was working, Thompson rewrote B in itself (a bootstrapping step). During development, he continually struggled against memory limitations: each language addition inflated the compiler so it could barely fit, but each rewrite taking advantage of the feature reduced its size. For example, B introduced generalized assignment operators, using x=+y to add y to x. The notation came from Algol 68 [Wijngaarden 75] via McIlroy, who had incorporated it into his version of TMG.
(In B and early C, the operator was spelled =+ instead of +=; this mistake, repaired in 1976, was induced by a seductively easy way of handling the first form in B's lexical analyzer.)

Thompson went a step further by inventing the ++ and -- operators, which increment or decrement; their prefix or postfix position determines whether the alteration occurs before or after noting the value of the operand. They were not in the earliest versions of B, but appeared along the way. People often guess that they were created to use the auto-increment and auto-decrement address modes provided by the DEC PDP-11 on which C and Unix first became popular. This is historically impossible, since there was no PDP-11 when B was developed. The PDP-7, however, did have a few `auto-increment' memory cells, with the property that an indirect memory reference through them incremented the cell. This feature probably suggested such operators to Thompson; the generalization to make them both prefix and postfix was his own. Indeed, the auto-increment cells were not used directly in implementation of the operators, and a stronger motivation for the innovation was probably his observation that the translation of ++x was smaller than that of x=x+1.

The B compiler on the PDP-7 did not generate machine instructions, but instead `threaded code' [Bell 72], an interpretive scheme in which the compiler's output consists of a sequence of addresses of code fragments that perform the elementary operations. The operations typically—in particular for B—act on a simple stack machine.

On the PDP-7 Unix system, only a few things were written in B except B itself, because the machine was too small and too slow to do more than experiment; rewriting the operating system and the utilities wholly into B was too expensive a step to seem feasible.
At some point Thompson relieved the address-space crunch by offering a `virtual B' compiler that allowed the interpreted program to occupy more than 8K bytes by paging the code and data within the interpreter, but it was too slow to be practical for the common utilities. Still, some utilities written in B appeared, including an early version of the variable-precision calculator dc familiar to Unix users [McIlroy 79]. The most ambitious enterprise I undertook was a genuine cross-compiler that translated B to GE-635 machine instructions, not threaded code. It was a small tour de force: a full B compiler, written in its own language and generating code for a 36-bit mainframe, that ran on an 18-bit machine with 4K words of user address space. This project was possible only because of the simplicity of the B language and its run-time system.

Although we entertained occasional thoughts about implementing one of the major languages of the time like Fortran, PL/I, or Algol 68, such a project seemed hopelessly large for our resources: much simpler and smaller tools were called for. All these languages influenced our work, but it was more fun to do things on our own.

By 1970, the Unix project had shown enough promise that we were able to acquire the new DEC PDP-11. The processor was among the first of its line delivered by DEC, and three months passed before its disk arrived. Making B programs run on it using the threaded technique required only writing the code fragments for the operators, and a simple assembler which I coded in B; soon, dc became the first interesting program to be tested, before any operating system, on our PDP-11. Almost as rapidly, still waiting for the disk, Thompson recoded the Unix kernel and some basic commands in PDP-11 assembly language. Of the 24K bytes of memory on the machine, the earliest PDP-11 Unix system used 12K bytes for the operating system, a tiny space for user programs, and the remainder as a RAM disk.
This version was only for testing, not for real work; the machine marked time by enumerating closed knight's tours on chess boards of various sizes. Once its disk appeared, we quickly migrated to it after transliterating assembly-language commands to the PDP-11 dialect, and porting those already in B.

By 1971, our miniature computer center was beginning to have users. We all wanted to create interesting software more easily. Using assembler was dreary enough that B, despite its performance problems, had been supplemented by a small library of useful service routines and was being used for more and more new programs. Among the more notable results of this period was Steve Johnson's first version of the yacc parser-generator [Johnson 79a].

The Problems of B

The machines on which we first used BCPL and then B were word-addressed, and these languages' single data type, the `cell,' comfortably equated with the hardware machine word. The advent of the PDP-11 exposed several inadequacies of B's semantic model. First, its character-handling mechanisms, inherited with few changes from BCPL, were clumsy: using library procedures to spread packed strings into individual cells and then repack, or to access and replace individual characters, began to feel awkward, even silly, on a byte-oriented machine.

Second, although the original PDP-11 did not provide for floating-point arithmetic, the manufacturer promised that it would soon be available. Floating-point operations had been added to BCPL in our Multics and GCOS compilers by defining special operators, but the mechanism was possible only because on the relevant machines, a single word was large enough to contain a floating-point number; this was not true on the 16-bit PDP-11.

Finally, the B and BCPL model implied overhead in dealing with pointers: the language rules, by defining a pointer as an index in an array of words, forced pointers to be represented as word indices.
Each pointer reference generated a run-time scale conversion from the pointer to the byte address expected by the hardware.

For all these reasons, it seemed that a typing scheme was necessary to cope with characters and byte addressing, and to prepare for the coming floating-point hardware. Other issues, particularly type safety and interface checking, did not seem as important then as they became later.

Aside from the problems with the language itself, the B compiler's threaded-code technique yielded programs so much slower than their assembly-language counterparts that we discounted the possibility of recoding the operating system or its central utilities in B.

In 1971 I began to extend the B language by adding a character type and also rewrote its compiler to generate PDP-11 machine instructions instead of threaded code. Thus the transition from B to C was contemporaneous with the creation of a compiler capable of producing programs fast and small enough to compete with assembly language. I called the slightly-extended language NB, for `new B.'

Embryonic C

NB existed so briefly that no full description of it was written. It supplied the types int and char, arrays of them, and pointers to them, declared in a style typified by

    int i, j;
    char c, d;
    int iarray[10];
    int ipointer[];
    char carray[10];
    char cpointer[];

The semantics of arrays remained exactly as in B and BCPL: the declarations of iarray and carray create cells dynamically initialized with a value pointing to the first of a sequence of 10 integers and characters respectively. The declarations for ipointer and cpointer omit the size, to assert that no storage should be allocated automatically.
Within procedures, the language's interpretation of the pointers was identical to that of the array variables: a pointer declaration created a cell differing from an array declaration only in that the programmer was expected to assign a referent, instead of letting the compiler allocate the space and initialize the cell.

Values stored in the cells bound to array and pointer names were the machine addresses, measured in bytes, of the corresponding storage area. Therefore, indirection through a pointer implied no run-time overhead to scale the pointer from word to byte offset. On the other hand, the machine code for array subscripting and pointer arithmetic now depended on the type of the array or the pointer: to compute iarray[i] or ipointer+i implied scaling the addend i by the size of the object referred to.

These semantics represented an easy transition from B, and I experimented with them for some months. Problems became evident when I tried to extend the type notation, especially to add structured (record) types. Structures, it seemed, should map in an intuitive way onto memory in the machine, but in a structure containing an array, there was no good place to stash the pointer containing the base of the array, nor any convenient way to arrange that it be initialized. For example, the directory entries of early Unix systems might be described in C as

    struct {
        int inumber;
        char name[14];
    };

I wanted the structure not merely to characterize an abstract object but also to describe a collection of bits that might be read from a directory. Where could the compiler hide the pointer to name that the semantics demanded?
Even if structures were thought of more abstractly, and the space for pointers could be hidden somehow, how could I handle the technical problem of properly initializing these pointers when allocating a complicated object, perhaps one that specified structures containing arrays containing structures to arbitrary depth?

The solution constituted the crucial jump in the evolutionary chain between typeless BCPL and typed C. It eliminated the materialization of the pointer in storage, and instead caused the creation of the pointer when the array name is mentioned in an expression. The rule, which survives in today's C, is that values of array type are converted, when they appear in expressions, into pointers to the first of the objects making up the array.

This invention enabled most existing B code to continue to work, despite the underlying shift in the language's semantics. The few programs that assigned new values to an array name to adjust its origin—possible in B and BCPL, meaningless in C—were easily repaired. More important, the new language retained a coherent and workable (if unusual) explanation of the semantics of arrays, while opening the way to a more comprehensive type structure.

The second innovation that most clearly distinguishes C from its predecessors is this fuller type structure and especially its expression in the syntax of declarations. NB offered the basic types int and char, together with arrays of them, and pointers to them, but no further ways of composition. Generalization was required: given an object of any type, it should be possible to describe a new object that gathers several into an array, yields it from a function, or is a pointer to it.

For each object of such a composed type, there was already a way to mention the underlying object: index the array, call the function, use the indirection operator on the pointer.
Analogical reasoning led to a declaration syntax for names mirroring that of the expression syntax in which the names typically appear. Thus,

    int i, *pi, **ppi;

declare an integer, a pointer to an integer, and a pointer to a pointer to an integer. The syntax of these declarations reflects the observation that i, *pi, and **ppi all yield an int type when used in an expression. Similarly,

    int f(), *f(), (*f)();

declare a function returning an integer, a function returning a pointer to an integer, and a pointer to a function returning an integer;

    int *api[10], (*pai)[10];

declare an array of pointers to integers, and a pointer to an array of integers. In all these cases the declaration of a variable resembles its usage in an expression whose type is the one named at the head of the declaration.

The scheme of type composition adopted by C owes considerable debt to Algol 68, although it did not, perhaps, emerge in a form that Algol's adherents would approve of. The central notion I captured from Algol was a type structure based on atomic types (including structures), composed into arrays, pointers (references), and functions (procedures). Algol 68's concept of unions and casts also had an influence that appeared later.

After creating the type system, the associated syntax, and the compiler for the new language, I felt that it deserved a new name; NB seemed insufficiently distinctive. I decided to follow the single-letter style and called it C, leaving open the question whether the name represented a progression through the alphabet or through the letters in BCPL.

Neonatal C

Rapid changes continued after the language had been named, for example the introduction of the && and || operators. In BCPL and B, the evaluation of expressions depends on context: within if and other conditional statements that compare an expression's value with zero, these languages place a special interpretation on the and (&) and or (|) operators.
In ordinary contexts, they operate bitwise, but in the B statement

    if (e1 & e2) ...

the compiler must evaluate e1 and, if it is non-zero, evaluate e2, and if it too is non-zero, elaborate the statement dependent on the if. The requirement descends recursively on & and | operators within e1 and e2. The short-circuit semantics of the Boolean operators in such `truth-value' context seemed desirable, but the overloading of the operators was difficult to explain and use. At the suggestion of Alan Snyder, I introduced the && and || operators to make the mechanism more explicit.

Their tardy introduction explains an infelicity of C's precedence rules. In B one writes

    if (a==b & c) ...

to check whether a equals b and c is non-zero; in such a conditional expression it is better that & have lower precedence than ==. In converting from B to C, one wants to replace & by && in such a statement; to make the conversion less painful, we decided to keep the precedence of the & operator the same relative to ==, and merely split the precedence of && slightly from &. Today, it seems that it would have been preferable to move the relative precedences of & and ==, and thereby simplify a common C idiom: to test a masked value against another value, one must write

    if ((a&mask) == b) ...

where the inner parentheses are required but easily forgotten.

Many other changes occurred around 1972-3, but the most important was the introduction of the preprocessor, partly at the urging of Alan Snyder [Snyder 74], but also in recognition of the utility of the file-inclusion mechanisms available in BCPL and PL/I. Its original version was exceedingly simple, and provided only included files and simple string replacements: #include and #define of parameterless macros. Soon thereafter, it was extended, mostly by Mike Lesk and then by John Reiser, to incorporate macros with arguments and conditional compilation. The preprocessor was originally considered an optional adjunct to the language itself.
Indeed, for some years, it was not even invoked unless the source program contained a special signal at its beginning. This attitude persisted, and explains both the incomplete integration of the syntax of the preprocessor with the rest of the language and the imprecision of its description in early reference manuals.

Portability

By early 1973, the essentials of modern C were complete. The language and compiler were strong enough to permit us to rewrite the Unix kernel for the PDP-11 in C during the summer of that year. (Thompson had made a brief attempt to produce a system coded in an early version of C—before structures—in 1972, but gave up the effort.) Also during this period, the compiler was retargeted to other nearby machines, particularly the Honeywell 635 and IBM 360/370; because the language could not live in isolation, the prototypes for the modern libraries were developed. In particular, Lesk wrote a `portable I/O package' [Lesk 72] that was later reworked to become the C `standard I/O' routines. In 1978 Brian Kernighan and I published The C Programming Language [Kernighan 78]. Although it did not describe some additions that soon became common, this book served as the language reference until a formal standard was adopted more than ten years later. Although we worked closely together on this book, there was a clear division of labor: Kernighan wrote almost all the expository material, while I was responsible for the appendix containing the reference manual and the chapter on interfacing with the Unix system.

During 1973-1980, the language grew a bit: the type structure gained unsigned, long, union, and enumeration types, and structures became nearly first-class objects (lacking only a notation for literals). Equally important developments appeared in its environment and the accompanying technology.
Writing the Unix kernel in C had given us enough confidence in the language's usefulness and efficiency that we began to recode the system's utilities and tools as well, and then to move the most interesting among them to the other platforms. As described in [Johnson 78a], we discovered that the hardest problems in propagating Unix tools lay not in the interaction of the C language with new hardware, but in adapting to the existing software of other operating systems. Thus Steve Johnson began to work on pcc, a C compiler intended to be easy to retarget to new machines [Johnson 78b], while he, Thompson, and I began to move the Unix system itself to the Interdata 8/32 computer.

The language changes during this period, especially around 1977, were largely focused on considerations of portability and type safety, in an effort to cope with the problems we foresaw and observed in moving a considerable body of code to the new Interdata platform. C at that time still manifested strong signs of its typeless origins. Pointers, for example, were barely distinguished from integral memory indices in early language manuals or extant code; the similarity of the arithmetic properties of character pointers and unsigned integers made it hard to resist the temptation to identify them. The unsigned types were added to make unsigned arithmetic available without confusing it with pointer manipulation. Similarly, the early language condoned assignments between integers and pointers, but this practice began to be discouraged; a notation for type conversions (called `casts' from the example of Algol 68) was invented to specify type conversions more explicitly.
Beguiled by the example of PL/I, early C did not tie structure pointers firmly to the structures they pointed to, and permitted programmers to write pointer->member almost without regard to the type of pointer; such an expression was taken uncritically as a reference to a region of memory designated by the pointer, while the member name specified only an offset and a type.

Although the first edition of K&R described most of the rules that brought C's type structure to its present form, many programs written in the older, more relaxed style persisted, and so did compilers that tolerated it. To encourage people to pay more attention to the official language rules, to detect legal but suspicious constructions, and to help find interface mismatches undetectable with simple mechanisms for separate compilation, Steve Johnson adapted his pcc compiler to produce lint [Johnson 79b], which scanned a set of files and remarked on dubious constructions.

Growth in Usage

The success of our portability experiment on the Interdata 8/32 soon led to another by Tom London and John Reiser on the DEC VAX 11/780. This machine became much more popular than the Interdata, and Unix and the C language began to spread rapidly, both within AT&T and outside. Although by the middle 1970s Unix was in use by a variety of projects within the Bell System as well as a small group of research-oriented industrial, academic, and government organizations outside our company, its real growth began only after portability had been achieved.
Of particular note were the System III and System V versions of the system from the emerging Computer Systems division of AT&T, based on work by the company's development and research groups, and the BSD series of releases by the University of California at Berkeley that derived from research organizations in Bell Laboratories.

During the 1980s the use of the C language spread widely, and compilers became available on nearly every machine architecture and operating system; in particular it became popular as a programming tool for personal computers, both for manufacturers of commercial software for these machines, and for end-users interested in programming. At the start of the decade, nearly every compiler was based on Johnson's pcc; by 1985 there were many independently-produced compiler products.

Standardization

By 1982 it was clear that C needed formal standardization. The best approximation to a standard, the first edition of K&R, no longer described the language in actual use; in particular, it mentioned neither the void nor enum types. While it foreshadowed the newer approach to structures, only after it was published did the language support assigning them, passing them to and from functions, and associating the names of members firmly with the structure or union containing them. Although compilers distributed by AT&T incorporated these changes, and most of the purveyors of compilers not based on pcc quickly picked them up, there remained no complete, authoritative description of the language.

The first edition of K&R was also insufficiently precise on many details of the language, and it became increasingly impractical to regard pcc as a `reference compiler;' it did not perfectly embody even the language described by K&R, let alone subsequent extensions. Finally, the incipient use of C in projects subject to commercial and government contract meant that the imprimatur of an official standard was important. Thus (at the urging of M. D.
McIlroy), ANSI established the X3J11 committee under the direction of CBEMA in the summer of 1983, with the goal of producing a C standard. X3J11 produced its report [ANSI 89] at the end of 1989, and subsequently this standard was accepted by ISO as ISO/IEC 9899-1990.

From the beginning, the X3J11 committee took a cautious, conservative view of language extensions. Much to my satisfaction, they took seriously their goal: `to develop a clear, consistent, and unambiguous Standard for the C programming language which codifies the common, existing definition of C and which promotes the portability of user programs across C language environments.' [ANSI 89] The committee realized that mere promulgation of a standard does not make the world change.

X3J11 introduced only one genuinely important change to the language itself: it incorporated the types of formal arguments in the type signature of a function, using syntax borrowed from C++ [Stroustrup 86]. In the old style, external functions were declared like this:

	double sin();

which says only that sin is a function returning a double (that is, double-precision floating-point) value. In the new style, this is better rendered

	double sin(double);

to make the argument type explicit and thus encourage better type checking and appropriate conversion. Even this addition, though it produced a noticeably better language, caused difficulties. The committee justifiably felt that simply outlawing `old-style' function definitions and declarations was not feasible, yet also agreed that the new forms were better. The inevitable compromise was as good as it could have been, though the language definition is complicated by permitting both forms, and writers of portable software must contend with compilers not yet brought up to standard.

X3J11 also introduced a host of smaller additions and adjustments, for example, the type qualifiers const and volatile, and slightly different type promotion rules.
Nevertheless, the standardization process did not change the character of the language. In particular, the C standard did not attempt to specify formally the language semantics, and so there can be dispute over fine points; nevertheless, it successfully accounted for changes in usage since the original description, and is sufficiently precise to base implementations on it.

Thus the core C language escaped nearly unscathed from the standardization process, and the Standard emerged more as a better, careful codification than a new invention. More important changes took place in the language's surroundings: the preprocessor and the library. The preprocessor performs macro substitution, using conventions distinct from the rest of the language. Its interaction with the compiler had never been well-described, and X3J11 attempted to remedy the situation. The result is noticeably better than the explanation in the first edition of K&R; besides being more comprehensive, it provides operations, like token concatenation, previously available only by accidents of implementation.

X3J11 correctly believed that a full and careful description of a standard C library was as important as its work on the language itself. The C language itself does not provide for input-output or any other interaction with the outside world, and thus depends on a set of standard procedures. At the time of publication of K&R, C was thought of mainly as the system programming language of Unix; although we provided examples of library routines intended to be readily transportable to other operating systems, underlying support from Unix was implicitly understood. Thus, the X3J11 committee spent much of its time designing and documenting a set of library routines required to be available in all conforming implementations.

By the rules of the standards process, the current activity of the X3J11 committee is confined to issuing interpretations on the existing standard.
However, an informal group originally convened by Rex Jaeschke as NCEG (Numerical C Extensions Group) has been officially accepted as subgroup X3J11.1, and they continue to consider extensions to C. As the name implies, many of these possible extensions are intended to make the language more suitable for numerical use: for example, multi-dimensional arrays whose bounds are dynamically determined, incorporation of facilities for dealing with IEEE arithmetic, and making the language more effective on machines with vector or other advanced architectural features. Not all the possible extensions are specifically numerical; they include a notation for structure literals.

Successors

C and even B have several direct descendants, though they do not rival Pascal in generating progeny. One side branch developed early. When Steve Johnson visited the University of Waterloo on sabbatical in 1972, he brought B with him. It became popular on the Honeywell machines there, and later spawned Eh and Zed (the Canadian answers to `what follows B?'). When Johnson returned to Bell Labs in 1973, he was disconcerted to find that the language whose seeds he brought to Canada had evolved back home; even his own yacc program had been rewritten in C, by Alan Snyder.

More recent descendants of C proper include Concurrent C [Gehani 89], Objective C [Cox 86], C* [Thinking 90], and especially C++ [Stroustrup 86]. The language is also widely used as an intermediate representation (essentially, as a portable assembly language) for a wide variety of compilers, both for direct descendants like C++, and independent languages like Modula 3 [Nelson 91] and Eiffel [Meyer 88].

Critique

Two ideas are most characteristic of C among languages of its class: the relationship between arrays and pointers, and the way in which declaration syntax mimics expression syntax. They are also among its most frequently criticized features, and often serve as stumbling blocks to the beginner.
In both cases, historical accidents or mistakes have exacerbated their difficulty. The most important of these has been the tolerance of C compilers to errors in type. As should be clear from the history above, C evolved from typeless languages. It did not suddenly appear to its earliest users and developers as an entirely new language with its own rules; instead we continually had to adapt existing programs as the language developed, and make allowance for an existing body of code. (Later, the ANSI X3J11 committee standardizing C would face the same problem.)

Compilers in 1977, and even well after, did not complain about usages such as assigning between integers and pointers or using objects of the wrong type to refer to structure members. Although the language definition presented in the first edition of K&R was reasonably (though not completely) coherent in its treatment of type rules, that book admitted that existing compilers didn't enforce them. Moreover, some rules designed to ease early transitions contributed to later confusion. For example, the empty square brackets in the function declaration

	int f(a) int a[]; { ... }

are a living fossil, a remnant of NB's way of declaring a pointer; a is, in this special case only, interpreted in C as a pointer. The notation survived in part for the sake of compatibility, in part under the rationalization that it would allow programmers to communicate to their readers an intent to pass f a pointer generated from an array, rather than a reference to a single integer. Unfortunately, it serves as much to confuse the learner as to alert the reader.

In K&R C, supplying arguments of the proper type to a function call was the responsibility of the programmer, and the extant compilers did not check for type agreement.
The failure of the original language to include argument types in the type signature of a function was a significant weakness, indeed the one that required the X3J11 committee's boldest and most painful innovation to repair. The early design is explained (if not justified) by my avoidance of technological problems, especially cross-checking between separately-compiled source files, and my incomplete assimilation of the implications of moving from an untyped to a typed language. The lint program, mentioned above, tried to alleviate the problem: among its other functions, lint checks the consistency and coherency of a whole program by scanning a set of source files, comparing the types of function arguments used in calls with those in their definitions.

An accident of syntax contributed to the perceived complexity of the language. The indirection operator, spelled * in C, is syntactically a unary prefix operator, just as in BCPL and B. This works well in simple expressions, but in more complex cases, parentheses are required to direct the parsing. For example, to distinguish indirection through the value returned by a function from calling a function designated by a pointer, one writes *fp() and (*pf)() respectively. The style used in expressions carries through to declarations, so the names might be declared

	int *fp();
	int (*pf)();

In more ornate but still realistic cases, things become worse:

	int *(*pfp)();

is a pointer to a function returning a pointer to an integer. There are two effects occurring. Most important, C has a relatively rich set of ways of describing types (compared, say, with Pascal). Declarations in languages as expressive as C—Algol 68, for example—describe objects equally hard to understand, simply because the objects themselves are complex. A second effect owes to details of the syntax. Declarations in C must be read in an `inside-out' style that many find difficult to grasp [Anderson 80].
Sethi [Sethi 81] observed that many of the nested declarations and expressions would become simpler if the indirection operator had been taken as a postfix operator instead of prefix, but by then it was too late to change.

In spite of its difficulties, I believe that C's approach to declarations remains plausible, and am comfortable with it; it is a useful unifying principle.

The other characteristic feature of C, its treatment of arrays, is more suspect on practical grounds, though it also has real virtues. Although the relationship between pointers and arrays is unusual, it can be learned. Moreover, the language shows considerable power to describe important concepts, for example, vectors whose length varies at run time, with only a few basic rules and conventions. In particular, character strings are handled by the same mechanisms as any other array, plus the convention that a null character terminates a string. It is interesting to compare C's approach with that of two nearly contemporaneous languages, Algol 68 and Pascal [Jensen 74]. Arrays in Algol 68 either have fixed bounds, or are `flexible:' considerable mechanism is required both in the language definition, and in compilers, to accommodate flexible arrays (and not all compilers fully implement them). Original Pascal had only fixed-sized arrays and strings, and this proved confining [Kernighan 81]. Later, this was partially fixed, though the resulting language is not yet universally available.

C treats strings as arrays of characters conventionally terminated by a marker. Aside from one special rule about initialization by string literals, the semantics of strings are fully subsumed by more general rules governing all arrays, and as a result the language is simpler to describe and to translate than one incorporating the string as a unique data type.
Some costs accrue from its approach: certain string operations are more expensive than in other designs because application code or a library routine must occasionally search for the end of a string, because few built-in operations are available, and because the burden of storage management for strings falls more heavily on the user. Nevertheless, C's approach to strings works well.

On the other hand, C's treatment of arrays in general (not just strings) has unfortunate implications both for optimization and for future extensions. The prevalence of pointers in C programs, whether those declared explicitly or arising from arrays, means that optimizers must be cautious, and must use careful dataflow techniques to achieve good results. Sophisticated compilers can understand what most pointers can possibly change, but some important usages remain difficult to analyze. For example, functions with pointer arguments derived from arrays are hard to compile into efficient code on vector machines, because it is seldom possible to determine that one argument pointer does not overlap data also referred to by another argument, or accessible externally. More fundamentally, the definition of C so specifically describes the semantics of arrays that changes or extensions treating arrays as more primitive objects, and permitting operations on them as wholes, become hard to fit into the existing language. Even extensions to permit the declaration and use of multidimensional arrays whose size is determined dynamically are not entirely straightforward [MacDonald 89] [Ritchie 90], although they would make it much easier to write numerical libraries in C. Thus, C covers the most important uses of strings and arrays arising in practice by a uniform and simple mechanism, but leaves problems for highly efficient implementations and for extensions.

Many smaller infelicities exist in the language and its description besides those discussed above, of course.
There are also general criticisms to be lodged that transcend detailed points. Chief among these is that the language and its generally-expected environment provide little help for writing very large systems. The naming structure provides only two main levels, `external' (visible everywhere) and `internal' (within a single procedure). An intermediate level of visibility (within a single file of data and procedures) is weakly tied to the language definition. Thus, there is little direct support for modularization, and project designers are forced to create their own conventions.

Similarly, C itself provides two durations of storage: `automatic' objects that exist while control resides in or below a procedure, and `static,' existing throughout execution of a program. Off-stack, dynamically-allocated storage is provided only by a library routine and the burden of managing it is placed on the programmer: C is hostile to automatic garbage collection.

Whence Success?

C has become successful to an extent far surpassing any early expectations. What qualities contributed to its widespread use?

Doubtless the success of Unix itself was the most important factor; it made the language available to hundreds of thousands of people. Conversely, of course, Unix's use of C and its consequent portability to a wide variety of machines was important in the system's success. But the language's invasion of other environments suggests more fundamental merits.

Despite some aspects mysterious to the beginner and occasionally even to the adept, C remains a simple and small language, translatable with simple and small compilers. Its types and operations are well-grounded in those provided by real machines, and for people used to how computers work, learning the idioms for generating time- and space-efficient programs is not difficult.
At the same time the language is sufficiently abstracted from machine details that program portability can be achieved.

Equally important, C and its central library support always remained in touch with a real environment. It was not designed in isolation to prove a point, or to serve as an example, but as a tool to write programs that did useful things; it was always meant to interact with a larger operating system, and was regarded as a tool to build larger tools. A parsimonious, pragmatic approach influenced the things that went into C: it covers the essential needs of many programmers, but does not try to supply too much.

Finally, despite the changes that it has undergone since its first published description, which was admittedly informal and incomplete, the actual C language as seen by millions of users using many different compilers has remained remarkably stable and unified compared to those of similarly widespread currency, for example Pascal and Fortran. There are differing dialects of C—most noticeably, those described by the older K&R and the newer Standard C—but on the whole, C has remained freer of proprietary extensions than other languages. Perhaps the most significant extensions are the `far' and `near' pointer qualifications intended to deal with peculiarities of some Intel processors. Although C was not originally designed with portability as a prime goal, it succeeded in expressing programs, even including operating systems, on machines ranging from the smallest personal computers through the mightiest supercomputers.

C is quirky, flawed, and an enormous success. While accidents of history surely helped, it evidently satisfied a need for a system implementation language efficient enough to displace assembly language, yet sufficiently abstract and fluent to describe algorithms and interactions in a wide variety of environments.

By: Ashwini Kumar Vishwakarma

What are transition loss and Carnot loss in a PA fan?

Caution: long answer ahead!

1. Air Flow

The flow of air or any other fluid is caused by a pressure differential between two points. The flow will originate from an area of high energy, or pressure, and proceed to the area(s) of lower energy or pressure.

Duct air moves according to three fundamental laws of physics: conservation of mass, conservation of energy, and conservation of momentum.

Conservation of mass simply states that an air mass is neither created nor destroyed. From this principle, it follows that the amount of air mass coming into a junction in a ductwork system is equal to the amount of air mass leaving the junction, or the sum of air masses at each junction is equal to zero. In most cases, the air in a duct is assumed to be incompressible, an assumption that overlooks the change of air density that occurs as a result of pressure loss and flow in the ductwork. In ductwork, the law of conservation of mass means a duct size can be recalculated for a new air velocity using the simple equation:

V2 = (V1 * A1) / A2

where V is velocity and A is area.

The law of energy conservation states that energy cannot disappear; it is only converted from one form to another. This is the basis of one of the main expressions of aerodynamics, the Bernoulli equation. Bernoulli's equation in its simple form shows that, for an elemental flow stream, the difference in total pressures between any two points in a duct is equal to the pressure loss between these points, or:

(Pressure loss)1-2 = (Total pressure)1 - (Total pressure)2

Conservation of momentum is based on Newton's law that a body will maintain its state of rest or uniform motion unless compelled by another force to change that state. This law is useful to explain flow behavior in a duct system's fitting.

1.1. Types of Flow

Laminar Flow
Flow parallel to a boundary layer.
In an HVAC system the plenum is a duct.

Turbulent Flow
Flow which is perpendicular and near the center of the duct and parallel near the outer edges of the duct.

Most HVAC applications fall in the transition range between laminar and turbulent flow.

1.2. Types of Pressure Losses or Resistance to Flow

Pressure loss is the loss of total pressure in a duct or fitting. There are three important observations that describe the benefits of using total pressure for duct calculation and testing rather than using only static pressure.

· Only total pressure in ductwork always drops in the direction of flow. Static or dynamic pressures alone do not follow this rule.

· The measurement of the energy level in an air stream is uniquely represented by total pressure only. The pressure losses in a duct are represented by the combined potential and kinetic energy transformation, i.e., the loss of total pressure.

· The fan energy increases both static and dynamic pressure. Fan ratings based only on static pressure are partial, but commonly used.

Pressure loss in ductwork has three components: frictional losses along duct walls, dynamic losses in fittings, and component losses in duct-mounted equipment.

Component Pressure
Due to physical items with known pressure drops, such as hoods, filters, louvers or dampers.

Dynamic Pressure
Dynamic losses are the result of changes in direction and velocity of air flow. Dynamic losses occur whenever an air stream makes turns, diverges, converges, narrows, widens, enters, exits, or passes dampers, gates, orifices, coils, filters, or sound attenuators. Velocity profiles are reorganized at these places by the development of vortexes that cause the transformation of mechanical energy into heat. The disturbance of the velocity profile starts at some distance before the air reaches a fitting. The straightening of a flow stream ends some distance after the air passes the fitting.
This distance is usually assumed to be no shorter than six duct diameters for a straight duct. Dynamic losses are proportional to dynamic pressure and can be calculated using the equation:

Dynamic loss = (Local loss coefficient) * (Dynamic pressure)

where the local loss coefficient, known as a C-coefficient, represents flow disturbances for particular fittings or for duct-mounted equipment as a function of their type and ratio of dimensions. Coefficients can be found in the ASHRAE fittings diagrams.

A local loss coefficient can be related to different velocities; it is important to know which part of the velocity profile is relevant. The relevant part of the velocity profile is usually the highest velocity in a narrow part of a fitting cross section or a straight/branch section in a junction.

Frictional Pressure
Frictional losses in duct sections result from air viscosity and momentum exchange among particles moving with different velocities. These losses contribute negligible losses or gains in air systems unless there are extremely long duct runs or there are significant sections using flex duct.

The easiest way of defining frictional loss per unit length is by using the Friction Chart (ASHRAE, 1997); however, this chart (shown below) should be used for elevations no higher than 500 m (1,600 ft), air temperatures between 5°C and 40°C (40°F and 100°F), and ducts with smooth surfaces. The Darcy-Weisbach equation should be used for “non-standard” duct types such as flex duct.

[Friction Chart (ASHRAE Handbook, 1997)]

1.3. Total Pressure, Velocity Pressure, and Static Pressure

It is convenient to calculate pressures in ducts using as a base an atmospheric pressure of zero. Mostly positive pressures occur in supply ducts and negative pressures occur in exhaust/return ducts; however, there are cases when negative pressures occur in a supply duct as a result of fitting effects.

Airflow through a duct system creates three types of pressures: static, dynamic (velocity), and total.
Each of these pressures can be measured. Air conveyed by a duct system imposes both static and dynamic (velocity) pressures on the duct's structure. The static pressure is responsible for much of the force on the duct walls. However, dynamic (velocity) pressure introduces a rapidly pulsating load.

Static Pressure
Static pressure is the measure of the potential energy of a unit of air in the particular cross section of a duct. Air pressure on the duct wall is considered static. Imagine a fan blowing into a completely closed duct; it will create only static pressure because there is no air flow through the duct. A balloon blown up with air is a similar case in which there is only static pressure.

Dynamic (Velocity) Pressure
Dynamic pressure is the kinetic energy of a unit of air flow in an air stream. Dynamic pressure is a function of both air velocity and density:

Dynamic pressure = (Density) * (Velocity)^2 / 2

The static and dynamic pressures are mutually convertible; the magnitude of each is dependent on the local duct cross section, which determines the flow velocity.

Total Pressure
Total pressure consists of the pressure the air exerts in the direction of flow (velocity pressure) plus the pressure air exerts perpendicular to the plenum or container through which the air moves. In other words:

PT = PV + PS

where:
PT = Total Pressure
PV = Velocity Pressure
PS = Static Pressure

This general rule is used to derive what is called the Fan Total Pressure. See the section entitled Fan Performance Specifications for a definition of Fan Total Pressure and Fan Static Pressure.

2. Air Systems

For kitchen ventilation applications an air system consists of hood(s), duct work, and fan(s). The relationship between the air flow rate (CFM) and the pressure of an air system is expressed as an increasing exponential function. The graph below shows an example of a system curve.
This curve shows the relationship between the air flow rate and the pressure of an air system.

Complex systems with branches and junctions, duct size changes, and other variations can be broken into sections or sub-systems. Each section or sub-system has its own system curve. See the diagram below for an illustration of this concept.

2.1. Fan Laws

Use the Fan Laws along a system curve. If you know one (CFM, S.P.) point of a system you can use Fan Law 2 to determine the static pressure for other flow rates. The Fan Laws apply to a fixed air system. Once any element of the system changes (duct size, hood length, riser size, etc.), the system curve changes.

Other calculations can be utilized to maneuver around a fan performance curve. For example, to calculate BHP from motor amp draw, use the following formulas:

Single-phase motors: BHP = (V * I * E * PF) / 746
Three-phase motors:  BHP = (V * I * E * PF * 1.73) / 746

where:
BHP = Brake Horsepower
V = Line Voltage
I = Line Current
E = Motor Efficiency (usually about 0.85 to 0.9)
PF = Motor Power Factor (usually about 0.9)

Once the BHP is known, the RPM of the fan can be measured. The motor BHP and fan RPM can then be matched on the fan performance curve to approximate airflow.

2.2. Air Density

The most common influences on air density are the effects of temperatures other than 70 °F and barometric pressures other than 29.92” caused by elevations above sea level.

Ratings found in fan performance tables and curves are based on standard air. Standard air is defined as clean, dry air with a density of 0.075 pounds per cubic foot, with the barometric pressure at sea level of 29.92 inches of mercury and a temperature of 70 °F. Selecting a fan to operate at conditions other than standard air requires adjustment to both static pressure and brake horsepower. The volume of air will not be affected in a given system because a fan will move the same amount of air regardless of the air density.
In other words, if a fan will move 3,000 CFM at 70 °F it will also move 3,000 CFM at 250 °F. Since 250 °F air weighs only about 75% of what 70 °F air weighs (a density correction factor of 1.34), the fan will require less BHP but it will also create less pressure than specified.

When a fan is specified for a given CFM and static pressure at conditions other than standard, the correction factors (shown in the table below) must be applied in order to select the proper fan size, fan speed, and BHP to meet the new condition. The best way to understand how the correction factors are used is to work out several examples. Let's look at an example using a specification for a fan to operate at 600 °F at sea level. This example clearly shows that the fan must be selected to handle a much greater static pressure than specified.

Example #1: A 20” centrifugal fan is required to deliver 5,000 CFM at 3.0 inches static pressure. Elevation is 0 (sea level). Temperature is 600 °F. At standard conditions, the fan will require 6.76 BHP.
1. Using the chart below, the correction factor is 2.00.
2. Multiply the specified operating static pressure by the correction factor to determine the standard air density equivalent static pressure. (Corrected static pressure = 3.0 x 2.00 = 6”. The fan must be selected for 6 inches of static pressure.)
3. Based upon the performance table for a 20” fan, at 5,000 CFM and 6 inches wg, 2,018 RPM is needed to produce the required performance.
4. What is the operating BHP at 600 °F? Since the horsepower shown in the performance chart refers to standard air density, it must be corrected to reflect the actual BHP for the lighter operating air: Operating BHP = standard BHP ÷ 2.00, or 6.76 ÷ 2.00 = 3.38 BHP.

2.3. System Constant
Every air system or sub-system has a system constant. This constant can be calculated as long as you know one (CFM, Static Pressure) point. You use a variation of the fan laws to calculate the system constant.
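This fan-law variation can be sketched as a short function pair using the relation K = S.P./(CFM)². The operating point below is taken from Problem #1 later in this paper; everything else is illustrative.

```python
def system_constant(cfm, static_pressure):
    """K = S.P. / CFM^2, computed from one known operating point."""
    return static_pressure / cfm ** 2

def static_pressure_at(cfm, k):
    """S.P. = K * CFM^2 for any other flow rate on the same system curve."""
    return k * cfm ** 2

k = system_constant(4000, 1.165)              # point from Problem #1, Section 1
print(round(static_pressure_at(4046, k), 3))  # 1.192
```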
To calculate the system constant:

K system = S.P. / (CFM)²

Once you have the system constant you can calculate the static pressure for any flow rate:

S.P. = (CFM)² x K system

3. Pressure Losses of an Air System
Pressure losses are more easily determined by breaking an air system into sections. Sections can be in series or in parallel.

3.1. Sections in Series
For sections or components in series, simply sum up all the sections. A single duct that has the same shape, cross section, and mass flow is called a duct section or just a section. Following is the recommended procedure for calculating the total pressure loss in a single duct section:
· Gather input data: air flow, duct shape, duct size, roughness, altitude, air temperature, and fittings;
· Calculate air velocity as a function of air flow and cross section;
· Calculate local C-coefficients for each fitting used; and
· Calculate pressure loss using the friction chart.
The following is a simple example of how duct pressure accumulates and is totaled in a section.

3.2. Sections in Parallel
When designing sections that run in parallel it is important to remember that the branches of a junction all have the same total pressure. This is governed by the principle that areas of high energy move to areas of lower energy. We will see how this applies to air systems in parallel. To illustrate these concepts we will reference the diagram below. In this example we calculate the pressure losses for Section 1 to be -0.75” at the junction and the pressure losses for Section 2 to be -0.6” at the junction. (NOTE: For simplicity's sake we do not consider the pressure loss incurred by the junction itself.) These would be the actual pressure losses of the sections if they operated independently; however, they do not. They interact at the junction.
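The rebalancing this implies can be sketched with the fan laws. The 0.75”/0.6” losses and the 2000/1800 CFM branch flows come from this example; treating each fixed branch as P = K x CFM² (section 2.3) gives the flow needed to match the governing loss.

```python
def balance_branch(cfm, loss, target_loss):
    """Fan-law rebalance for a fixed branch: Q_new = Q_old * sqrt(P_target / P_old).
    Follows from P = K * CFM^2 along a branch's own system curve."""
    return cfm * (target_loss / loss) ** 0.5

# Section 1 loses 0.75 in. wg at 2000 CFM; Section 2 loses 0.60 in. wg at 1800 CFM.
governing = max(0.75, 0.60)                 # the higher-loss branch governs
q2_new = balance_branch(1800, 0.60, governing)
print(round(q2_new))                        # 2012
```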
This means that whenever air flow encounters a junction it will take the path of least resistance, and the total pressure losses of each branch of the junction will be the same. For sections that run in parallel, always use the section with the higher pressure loss/gain to determine the pressure losses/gains through the system. Adjust the branch with the lower pressure loss/gain by increasing its flow rate or decreasing its duct size to raise its pressure loss to that of the higher branch. If the flow rate or the duct size is not changed, the air flow through each branch will adjust itself so that each branch has the same total pressure loss/gain; in other words, more air flows through the branch with the lower pressure loss/gain or energy state. In the example below, the actual pressure loss would be somewhere between -0.75” and -0.6”. Section 1 would pull less than 2000 CFM and Section 2 would pull more than 1800 CFM.

3.3. System Effect
System Effect occurs in an air system when two or more elements such as fittings, a hood and a fitting, or a fan and a fitting occur within close proximity to one another. The effect is to increase the pressure loss in a system as air flows through the elements. To calculate the pressure loss incurred by such a configuration, consider two elements at a time. For example, if two elbows occur 4 feet from one another, this configuration has a pressure loss associated with it. Calculate the pressure loss/gain associated with each fitting as if it occurred alone, sum these, and multiply the sum by a system effect coefficient (K). The system effect coefficient can be obtained from the ASHRAE Fitting Diagrams for only a limited number of configurations of elements. Configurations not listed must use estimates or best guesses; in many cases, you can use a listed configuration as a guide. One configuration not listed is an elbow within close proximity to the collar of a hood.
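The two-elements-at-a-time rule can be sketched directly. The coefficients below are the rule-of-thumb values this paper gives for an elbow near a hood collar; the 0.03” fitting losses are illustrative.

```python
# Rule-of-thumb system-effect coefficients for an elbow near a hood collar,
# keyed by riser-to-elbow distance in feet (see the System Effect Table).
K_SYSTEM_EFFECT = {2: 1.75, 3: 1.5, 4: 1.3, 5: 1.2}

def system_effect_loss(k, loss_a, loss_b):
    """Pressure Loss = K * (Element A Resistance + Element B Resistance)."""
    return k * (loss_a + loss_b)

# Two fittings each losing 0.03 in. wg, with the elbow 2 ft from the riser:
print(round(system_effect_loss(K_SYSTEM_EFFECT[2], 0.03, 0.03), 3))  # 0.105
```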
As a rule of thumb, the chart below offers some guidance for determining the system effect for this situation. Remember that the coefficients in the chart are only estimates.

System Effect Table
Distance between Riser and Elbow    System Effect Coefficient (K)
2 feet                              1.75
3 feet                              1.5
4 feet                              1.3
5 feet                              1.2

The diagrams below show system effect factors for straight-through elements and turning elements. For rectangular ductwork, D = (2HW)/(H+W). The following formula should be used to calculate the pressure caused by system effect:

Pressure Loss = K x (Element A Resistance + Element B Resistance)

Straight Through Flow / Turning Elements

The following diagrams show proper and improper methods of constructing ductwork.

4. Fan Performance Specification
A fan performance spec is given as a Fan Total Pressure or a Fan Static Pressure which can handle a certain flow rate. Most manufacturers' performance charts are based on Fan Static Pressure. The diagram below illustrates the difference between exhaust and supply systems.

5. Pressure Calculations
5.1. Methodology
Break the system into sections. A new section occurs at:
1) Changes in duct size.
2) Changes in air volume.
Calculate losses for each section, beginning at the section farthest from the fan and working towards the fan. For each section:
1. Write down or calculate all known variables:
Air Flow Rate (Q)
Duct Cross-Sectional Area of the section (A)
Center-Line Length of the section (L)
Air Velocity through the section (V = Q/A)
Velocity Pressure (Pv = (V/4005)²)
2.
Write down or calculate all pressure losses in the section:
a) List the Component Losses/Gains, incurred by hoods, ESPs, filters, dampers, etc.
b) Calculate the Dynamic Losses/Gains, which occur through elbows, transitions, tees, or any other type of fitting. Use the ASHRAE Fitting Diagrams to find Dynamic Loss Coefficients for fittings. Be sure to factor in System Effect!
c) Calculate the Frictional Losses/Gains. Use the ASHRAE Friction Chart for "standard" galvanized ductwork. Use the Darcy-Weisbach Equation for "non-standard" duct such as flex duct.
3. Sum up the Component, Dynamic, and Frictional Pressure for the section.
4. Sum up the pressure losses for all of the sections.

5.2. Assumptions and Corrections
Standard air density, 0.075 lb/cu ft, is used for most HVAC applications. Frictional losses are based on galvanized metal duct with 40 joints per 100 ft.

Correction for "Non-Standard" Duct Material
If material other than galvanized metal is used in parts of the system, you will have to adjust for the difference in the material's roughness factor. This means the Friction Chart typically used to determine frictional losses cannot be used, and you must use a variation of the Darcy-Weisbach Equation. See the section titled Equations for more information on this equation.

Correction for Density
Not needed if the temperature is between 40 °F and 100 °F and the elevation is between -1,000 ft and 1,000 ft.

Correction for Moisture
Not needed if the air temperature is below 100 °F.

6.
Problem # 1 – An Exhaust System
The first step is to break the system into sections.
Section 1 runs from the 16' Hood to the Bullhead Tee.
Section 2 runs from the 13' Hood to the Bullhead Tee.
Section 3 runs from the Bullhead Tee to the Exhaust Fan.
Now calculate the pressure losses for each section.

Section 1
Air Flow Rate: Q = 4000 CFM
Cross-Sectional Area: A = (10 x 36)/144 = 2.5 ft²
Center Line Distance: L = 2' + 6' = 8'
Velocity: V = 4000/2.5 = 1600 ft/min
Velocity Pressure: Pv1 = (V/4005)² = (1600/4005)² = 0.16”

Loss Calculations
Component Losses
Hood Loss: Phood1 = -0.688”
Look up from the manufacturer's hood static pressure curves. Here is a link to the Hood Static Pressure Calculator.

Frictional Losses
Use the Friction Chart to look up the pressure loss per 100 ft of duct.
Pfr1 = -(0.16”/100 ft) x (8') = -0.013”

Dynamic Losses
Mitered Elbow. Look up Fitting 3-6 in Appendix 2 - ASHRAE Fittings. The dynamic coefficient is C0 = 1.3.
Pelbow1 = -C0 x Pv1 = -(1.3) x (0.16”) = -0.208”

Bullhead Tee. Look up the coefficient from Appendix 3 - Bullhead Tee Curves. Some general rules for bullhead tees:
Since Section 1 has the larger duct size, this section is the "u" side of the bullhead tee. The following describes how to use the bullhead tee curves to find Ku for the "u" side of the bullhead tee.
Since Au/Ad = (10x36)/(20x30) = 0.6, we find the bullhead tee curves for which Au/Ad is 0.6 and the y-axis represents Ku.
We know that Qb/Qd = 4000/(4000+3400) = 0.54. For simplicity and ease of graphing, we round 0.54 to the nearest 10th, giving us 0.5.
We also know that Ab/Ad = (10x30)/(20x30) = 0.5.
Equipped with these ratios, we can draw a line from the point on the x-axis where Qb/Qd is 0.5 up to where it intersects the curve for which Ab/Ad is 0.5.
We find Ku = 1.6.
NOTE: Due to human error resulting from manually graphing the value of Ku, the number you graph may be slightly different than the value shown above.
The important thing is to know how to use the curves and get a reasonable value for Ku. Now we can calculate the pressure drop contributed by the bullhead tee for Section 1:
Pbulltee1 = -Ku x Pv1 = -(1.6) x (0.16”) = -0.256”

The total pressure loss for Section 1 is:
Pt loss 1 = Phood1 + Pfr1 + Pelbow1 + Pbulltee1
Pt loss 1 = -0.688” - 0.013” - 0.208” - 0.256” = -1.165”

Section 2
Air Flow Rate: Q = 3400 CFM
Cross-Sectional Area: A = (10 x 30)/144 = 2.1 ft²
Center Line Distance: L = 2' + 7' = 9'
Velocity: V = 3400/2.1 = 1619 ft/min
Velocity Pressure: Pv2 = (V/4005)² = (1619/4005)² = 0.16”

Loss Calculations
Component Losses
Hood Loss: Phood2 = -0.688” (look up from the hood static pressure curves)

Frictional Losses
Use the Friction Chart to look up the pressure loss per 100 ft of duct.
Pfr2 = -(0.18”/100 ft) x (9') = -0.016”

Dynamic Losses
Mitered Elbow. Look up Fitting 3-6 in Appendix 2 - ASHRAE Fittings. The dynamic coefficient is C0 = 1.3.
Pelbow2 = -C0 x Pv2 = -(1.3) x (0.16”) = -0.208”

Bullhead Tee. Using the methodology described for the bullhead tee in Section 1, we can find the value of the coefficient Kb for the "b" side of the bullhead tee. Use the bullhead tee curves for which Au/Ad is 0.6 and the y-axis represents Kb.
We find that Kb = 1.75, and the resulting pressure loss is:
Pbulltee2 = -Kb x Pv2 = -(1.75) x (0.16”) = -0.280”

The total pressure loss for Section 2 is:
Pt loss 2 = Phood2 + Pfr2 + Pelbow2 + Pbulltee2
Pt loss 2 = -0.688” - 0.016” - 0.208” - 0.280” = -1.192”

Balance by Design
Note that the pressure loss of Section 2 is greater than the loss of Section 1.
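As a check, the two section tallies can be reproduced in a few lines. All values are taken from the calculations above; losses are negative for an exhaust system.

```python
def section_loss(hood, friction, dynamic_losses):
    """Total a section: component + frictional + dynamic losses (all in. wg)."""
    return hood + friction + sum(dynamic_losses)

# Section 1: hood, friction, mitered elbow (C0 = 1.3), bullhead tee (Ku = 1.6), Pv = 0.16"
s1 = section_loss(-0.688, -0.013, [-1.3 * 0.16, -1.6 * 0.16])
# Section 2: hood, friction, mitered elbow (C0 = 1.3), bullhead tee (Kb = 1.75), Pv = 0.16"
s2 = section_loss(-0.688, -0.016, [-1.3 * 0.16, -1.75 * 0.16])
print(round(s1, 3), round(s2, 3))  # -1.165 -1.192
```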
To balance the system by design, increase the air flow rate in Section 1 to bring it up to the higher pressure loss of Section 2. To correct the air flow rate for Section 1 use the Fan Laws:
Q1 new = Q1 old x (Pt loss 1 new / Pt loss 1 old)^(1/2)
Q1 new = 4000 x (1.192/1.165)^(1/2) = 4046 CFM

Section 3
Air Flow Rate: Q = 3400 CFM + 4046 CFM = 7446 CFM
Cross-Sectional Area: A = (20 x 30)/144 = 4.17 ft²
Center Line Distance: L = 9'
Velocity: V = 7446/4.17 = 1785 ft/min
Velocity Pressure: Pv3 = (V/4005)² = (1785/4005)² = 0.20”

Loss Calculations
Component Losses: None
Frictional Losses: Use the Friction Chart to look up the pressure loss per 100 ft of duct.
Pfr3 = -(0.15”/100 ft) x (9') = -0.014”
Dynamic Losses: None
The total pressure loss for Section 3 is:
Pt loss 3 = Pfr3 = -0.014”

Total Pressure Loss of System
Since the pressure loss of Section 2 is greater than that of Section 1, it is used to calculate the pressure loss of the entire system:
Pt loss = Pt loss 2 + Pt loss 3 = -1.192” - 0.014” = -1.206”

7. Problem # 2 – A Change in the System's Air Flow Rate
Now we will change the air flow rate through Section 2 from 3400 CFM to 3000 CFM. We will illustrate how, once you know one (CFM, S.P.) point of a system, you can use the Fan Laws to calculate the pressure loss for other air flow rates.

Section 1
There is no change. Pt loss 1 = -1.165”

Section 2
Air Flow Rate: Q = 3000 CFM
Cross-Sectional Area: A = (10 x 30)/144 = 2.1 ft²
Center Line Distance: L = 2' + 7' = 9'
Velocity: V = 3000/2.1 = 1429 ft/min
Velocity Pressure: Pv2 = (V/4005)² = (1429/4005)² = 0.13”

Loss Calculations
Component Losses
Hood Loss. Use the Fan Laws to calculate a new hood loss or look it up in the Hood S.P. chart.
Phood2 = -(0.688”) x ((3000 CFM)²/(3400 CFM)²) = -0.536”

Frictional Losses
Use the Friction Chart to look up the pressure loss per 100 ft of duct.
Pfr2 = -(0.15”/100 ft) x (9') = -0.014”

Dynamic Losses
Mitered Elbow.
Look up Fitting 3-6 in Appendix 2 - ASHRAE Fittings. The dynamic coefficient is C0 = 1.3.
Pelbow2 = -C0 x Pv2 = -(1.3) x (0.13”) = -0.169”

Bullhead Tee. Since Section 2 is the "b" side, we use the set of bullhead tee curves for which Au/Ad is 0.6 and the y-axis represents Kb. We find that Kb = 1.65.
Pbulltee2 = -Kb x Pv2 = -(1.65) x (0.13”) = -0.215”

Total Section Loss:
Pt loss 2 = Phood2 + Pfr2 + Pelbow2 + Pbulltee2
Pt loss 2 = -0.536” - 0.014” - 0.169” - 0.215” = -0.93”

Using the Fan Laws to calculate the new total pressure loss for Section 2:
Pt loss 2 = -(1.192”) x ((3000 CFM)²/(3400 CFM)²) = -0.93”

Balance by Design
Note that the pressure loss of Section 1 is now greater than the loss of Section 2. To balance the system by design we must increase the air flow rate in Section 2 to bring it up to the higher pressure loss of Section 1. To correct the air flow rate for Section 2 use the Fan Laws:
Q2 new = Q2 old x (Pt loss 2 new / Pt loss 2 old)^(1/2)
Q2 new = 3000 x (1.165/0.93)^(1/2) = 3357 CFM

Section 3
Air Flow Rate: Q = 3357 CFM + 4000 CFM = 7357 CFM
Cross-Sectional Area: A = (20 x 30)/144 = 4.17 ft²
Center Line Distance: L = 9'
Velocity: V = 7357/4.17 = 1764 ft/min
Velocity Pressure: Pv3 = (V/4005)² = (1764/4005)² = 0.19”

Loss Calculations
Component Losses: None
Frictional Losses: Use the Friction Chart to look up the pressure loss per 100 ft of duct.
Pfr3 = -(0.14”/100 ft) x (9') = -0.013”
Dynamic Losses: None
Using the Fan Laws to calculate the new total pressure loss for Section 3:
Pt loss 3 = -(0.014”) x ((7357 CFM)²/(7446 CFM)²) = -0.013”

Total System Loss
Calculated with Tables and ASHRAE Charts:
Pt loss = Pt loss 1 + Pt loss 3 = -1.165” - 0.013” = -1.178”
As shown above, Branch 1 of the junction is used to calculate the system's total pressure loss because it has the greater pressure drop of the two branches.
Calculated with the Fan Laws:
Pt loss = -(1.206”) x ((7357 CFM)²/(7446 CFM)²) = -1.178”

8.
Problem # 3 - A Supply System
The first part of this problem shows the pressure gains obtained by measuring the total pressure at the 3 points shown in the diagram above, and provides some rules of thumb for estimating pressure at an elbow and at the supply collar. The second part of the problem calculates the pressure gain of the system and compares it to the measured pressure gain. The entire system satisfies the definition of a section since there are no junctions or duct size changes; the transitions off the supply collars can be included in the section.

Supply System - Measured Pressure
A 0” to 1” Dwyer manometer was used to measure the pressure of the system at 3 points. The pressure was measured for two different flow rates. The results are shown in the table below.

Measurements Taken at 3 Points of the Supply System
Air Flow Rate (CFM)   Velocity (ft/min)   Point 1 @ collar (in. wg)   Point 2 after 1st elbow (in. wg)   Point 3 after 2nd elbow (in. wg)
1000                  935                 0.075                       0.140                              0.260
1920                  1793                0.276                       0.570                              0.910

The table shows:
1) How higher air velocities greatly increase the pressure. When the air flow rate is raised to 1920 CFM, the velocity through the duct roughly doubles and the pressure increases about 3-1/2 fold.
2) The system effect of having 2 elbows close to each other and close to the hood. Using the pressure gains for 1000 CFM flowing through the system, we see that the pressure gain for the first elbow is 0.14” - 0.075” = 0.065”. This reflects the system effect of having an elbow close to the supply opening of a hood. The pressure gain for the second elbow is 0.26” - 0.14” = 0.12”. This reflects the system effect of having two elbows within close proximity to one another and close to the hood.
3) When the system supplies 1000 CFM, the pressure gain at the supply collar is 0.075”. This illustrates how low the pressure really is when a system is designed for the desired velocity of between 900 and 1000 ft/min.
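The per-element gains quoted in item 2 fall straight out of the measured table, which a few lines of arithmetic make explicit:

```python
# Measured total pressures (in. wg) at 1000 CFM, from the table above.
collar, after_first_elbow, after_second_elbow = 0.075, 0.140, 0.260

gain_first_elbow = after_first_elbow - collar                # elbow near the hood collar
gain_second_elbow = after_second_elbow - after_first_elbow   # two elbows in succession
print(round(gain_first_elbow, 3), round(gain_second_elbow, 2))  # 0.065 0.12
```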
The table below provides some rules of thumb for estimating the pressure gain at the supply collar:

Hood Length (L)    Pressure Loss Estimate
L <= 8'            1/16” max.
8' < L <= 12'      1/16” to 1/8” max.
12' < L <= 16'     1/8” to 1/4” max.

This table assumes that the system has been designed for velocities around 1000 ft/min.

Test Kitchen Supply System - Calculated Pressure
Section 1
Air Flow Rate: Q = 1000 CFM
Cross-Sectional Area: A = πr² = π(7)²/144 = 1.069 ft²
Center Line Distance: L = 15'
Velocity: V = 1000/1.069 = 935 ft/min
Velocity Pressure: Pv1 = (V/4005)² = (935/4005)² = 0.055”

Loss Calculations
Component Losses
Hood. Assume a 1/16” pressure gain at the collar.
Phood = 0.063”

Frictional Losses
Use the Friction Chart to look up the pressure loss per 100 ft of duct.
Pfr = (0.095”/100 ft) x (15') = 0.014”

Dynamic Losses
12” x 20” to 10” x 16” Rectangular Transition
Use ASHRAE fitting 5-1 in Appendix 2 – ASHRAE Fittings. To find the dynamic coefficient we calculate:
θ/2 = tan⁻¹(2”/6”) = 18°, therefore θ = 36°
A0/A1 = (12x20)/(10x16) = 1.5
Therefore C0 = 0.05
Ptrans1 = C0 x Pv1 = (0.05)(0.055”) = 0.003”

10” x 16” Rectangular to 14” Round Transition
Use ASHRAE fitting 4-6 in Appendix 2 – ASHRAE Fittings.
B = (W/H)(A0/A1)² = (16/10)(1.069/1.111)² = 1.48
Re = 8.56DV = (8.56)(14”)(935 ft/min) = 112,050
Therefore Re x 10⁻⁴ = 11. Use the value for Re x 10⁻⁴ = 10. L/D is not relevant in this case.
C0 = 0.11
Ptrans2 = C0 x Pv1 = (0.11)(0.055”) = 0.006”

Two 90° Radius Elbows
Use ASHRAE fitting 3-2. Assume a 3-piece elbow. Assume r = 10”, so r/D = 10/14 = 0.71, therefore Kθ = 1.
C0 = 0.54
Pelbow1 = C0 x Pv1 = (0.54)(0.055”) = 0.03”
Pelbow2 = C0 x Pv1 = (0.54)(0.055”) = 0.03”

We must figure in the system effect incurred by having an elbow close to the supply collar. Use the table in the System Effect section of this paper to estimate the system effect. The elbow is about 2' above the supply riser, so KSE = 1.75.
P SE elbow-hood = KSE x Pelbow1 = (1.75)(0.03”) = 0.053”

Now we must factor in the system effect for the 2 elbows in succession.
We decide to use the S-shaped fitting in the ASHRAE handbook to estimate the system effect. We use ASHRAE fitting 3-14.
θ = 90°
L/D = 60”/14” = 4.29
KSE = 1.55
P SE S-fitting = KSE x (Pelbow1 + Pelbow2) = 1.55 x (0.03” + 0.03”) = 0.093”

Total Section Loss:
Pt loss 1 = Phood + Pfr + Ptrans1 + Ptrans2 + P SE elbow-hood + P SE S-fitting
Pt loss 1 = 0.063” + 0.014” + 0.003” + 0.006” + 0.053” + 0.093”
Pt loss 1 = 0.232”

The measured value of 0.26” differs because of error in the system effect estimates. Now we can determine the size of fan we need. A 10” blower will handle 1000 CFM at 0.232”.

To calculate the Fan Static Pressure:
Ps fan = Pt loss - Pv discharge
Use the blower manufacturer's product literature to get the dimensions of the blower outlet so the velocity pressure at the fan discharge can be calculated:
Pv discharge = (V discharge/4005)²
Pv discharge = ((1000/((11.38 x 13.13)/144))/4005)²
Pv discharge = 0.058”
Ps fan = 0.232” - 0.058” = 0.174”

9. Appendix 1 - Equations

Total Pressure (PT)
PT = Pv + Ps
Pv = Velocity Pressure
Ps = Static Pressure

Fan Static Pressure (Ps fan)
For Exhaust: Ps fan = Pt loss
For Supply: Ps fan = Pt loss - Pv discharge
Ps fan = Fan Static Pressure
Pt loss = Dynamic and Friction Losses
Pv discharge = Velocity Pressure at the Fan Discharge

Velocity Pressure (Pv)
Pv = ρ(V/1097)²
For standard air, Pv equals:
Pv = (V/4005)²
V = Velocity through the duct.

Friction Losses (Pfr)
Darcy-Weisbach Equation:
Pfr = (f/D) x L x VP
Then substitute (f/D) with Hf:
Pfr = Hf x L x VP
L = Duct Section Length (ft)
f = Friction Factor
D = Duct Diameter (ft)
Hf is defined as:
Hf = a x V^b / Q^c
V = Velocity through the duct cross section.
Q = Flow Rate (CFM) through the duct section.
See the table titled Surface Roughness Correlation Constants for values of a, b, and c.

Surface Roughness Correlation Constants
Material                                  k         a        b        c
Aluminum, Black Iron, Stainless Steel     0.00015   0.0425   0.0465   0.602
Galvanized                                0.0005    0.0307   0.533    0.612
Flexible Duct                             0.003     0.0311   0.604    0.639
k = Roughness factor for the material.

10.
Appendix 2 – ASHRAE Fittings
Fitting 3-2
Fitting 3-6
Fitting 3-14
Fitting 4-6
Fitting 5-1

11. Appendix 3 – Bullhead Tee Curves
Au/Ad = 0.2, Kb
Au/Ad = 0.2, Ku
Au/Ad = 0.3, Kb
Au/Ad = 0.3, Ku
Au/Ad = 0.4, Kb
Au/Ad = 0.4, Ku
Au/Ad = 0.5, Kb
Au/Ad = 0.5, Ku
Au/Ad = 0.6, Kb
Au/Ad = 0.6, Ku
Au/Ad = 0.8, Kb
Au/Ad = 0.8, Ku
Au/Ad = 1.0, Kb
Au/Ad = 1.0, Ku

Air Flow, Air Systems, Pressure, and Fan Performance
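The non-standard-duct friction model in Appendix 1 can be sketched with the correlation constants above. This is a hedged reading of Pfr = Hf x L x VP with Hf = a x V^b / Q^c; the duct run, velocity, and flow below are illustrative, not from the source.

```python
# Surface roughness correlation constants (a, b, c) from Appendix 1.
CONSTANTS = {
    "aluminum/black iron/stainless": (0.0425, 0.0465, 0.602),
    "galvanized": (0.0307, 0.533, 0.612),
    "flexible": (0.0311, 0.604, 0.639),
}

def friction_loss(material, velocity_fpm, flow_cfm, length_ft):
    """Pfr = Hf * L * VP, with Hf = a*V^b / Q^c (Darcy-Weisbach variation
    from Appendix 1) and VP the standard-air velocity pressure (V/4005)^2."""
    a, b, c = CONSTANTS[material]
    hf = a * velocity_fpm ** b / flow_cfm ** c
    vp = (velocity_fpm / 4005.0) ** 2
    return hf * length_ft * vp

# e.g. 15 ft of flexible duct at 935 ft/min carrying 1000 CFM (illustrative)
print(round(friction_loss("flexible", 935, 1000, 15), 3))
```

As expected, the rougher flexible duct predicts a higher loss than galvanized duct for the same run and flow.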
