Add Parallelism: Fill & Download for Free

How to Edit The Add Parallelism easily Online

Start editing, signing and sharing your Add Parallelism online with the help of these easy steps:

  • Click the Get Form or Get Form Now button on the current page to open the PDF editor.
  • Wait a moment for the Add Parallelism file to load.
  • Use the tools in the top toolbar to edit the file; your changes will be saved automatically.
  • Download your modified file.

A top-rated Tool to Edit and Sign the Add Parallelism

Start editing an Add Parallelism in seconds

Clear directions for editing Add Parallelism Online

It has become really simple nowadays to edit your PDF files online, and CocoDoc is the best web app for editing your file and saving it. Follow our simple tutorial to get started!

  • Click the Get Form or Get Form Now button on the current page to start modifying your PDF
  • Add, modify or erase your content using the editing tools on the tool pane above.
  • After editing your content, add the date and draw a signature to complete the form.
  • Review your form again before you click the download button.

How to add a signature on your Add Parallelism

Though most people are in the habit of signing paper documents with a pen, electronic signatures are becoming more common. Follow these steps to sign documents online for free!

  • Click the Get Form or Get Form Now button to begin editing your Add Parallelism in the CocoDoc PDF editor.
  • Click on the Sign icon in the tool pane at the top.
  • A box will pop up; click the Add New Signature button and you'll be given three options: Type, Draw, and Upload. Once you're done, click the Save button.
  • Drag the signature to position it inside your PDF file.

How to add a textbox on your Add Parallelism

If you need to add a text box to your PDF to customize your content, follow these steps to complete it.

  • Open the PDF file in CocoDoc PDF editor.
  • Click Text Box on the top toolbar and move your mouse to place the box wherever you want it.
  • Fill in the content you need to insert. After you’ve filled in the text, you can utilize the text editing tools to resize, color or bold the text.
  • When you're done, click OK to save it. If you're not satisfied with the text, click the trash can icon to delete it and start over.

An easy guide to Edit Your Add Parallelism on G Suite

If you are seeking a solution for PDF editing on G Suite, CocoDoc PDF editor is a commendable tool that can be used directly from Google Drive to create or edit files.

  • Find CocoDoc PDF editor and install the add-on for Google Drive.
  • Right-click on a chosen file in your Google Drive and click Open With.
  • Select CocoDoc PDF from the popup list to open your file, and allow CocoDoc access to your Google account.
  • Make your changes in CocoDoc PDF editor: add text and images, edit existing text, annotate with highlights, and polish the text before clicking the Download button.

PDF Editor FAQ

Why did NVIDIA win the GPU market?

nVidia has certainly been winning the desktop and professional GPU market. They are not necessarily winning the portable GPU market. Curiously, and not coincidentally, you can say much the same thing about Intel and their CPUs. And for many of the same reasons.

The Birth of the Graphics Market

If you go back to the 1980s, most systems companies developed their own graphics. The first consumer-level graphics processor of any kind was launched in the fall of 1985 in the Commodore Amiga 1000 personal computer, designed at the Amiga, Inc. startup and by Commodore's staff at Commodore Semiconductor Group, formerly MOS Technology.

Most graphics subsystems in those days used very simple off-the-shelf controllers, such as the Motorola 6845, with separate memory and monochrome or 4-bit digital color, all very simple. And so it was like much of the PC architecture: another piece that the systems companies made themselves.

This got a boost in 1987, when IBM introduced the PS/2 with the VGA card. This was based on an integrated chip designed at IBM, which was much more sophisticated than the sort of controllers you could buy off the shelf. This started a scramble by the PC industry to develop their own graphics adapters.

There were over a dozen companies making VGA products by the end of the 1980s. Some computer companies, like Compaq, still made their own. Others were either fairly new chip companies, like ATi Technologies, founded to make graphics chips, or existing chip companies like Chips & Technologies, Cirrus Logic, Western Digital, Trident Microsystems, etc. All of these were collectively dubbed "SuperVGA", usually being compatible with IBM's VGA but offering more options. These were not yet true GPUs, as they did no computation of any kind -- they just displayed graphics.

The Graphics Processing Unit Evolves

Having conquered the SuperVGA, some graphics companies worked to figure out "what next". For most companies, that was graphics acceleration... like the Amiga did in 1985. They were adding bit blitters that could move graphics around on a screen much faster than your CPU could do it.

One of the more successful companies of this era was S3, founded in 1989. S3 delivered a good graphics chip in 1991 that did acceleration of all SVGA modes. S3 was one of the standard brands at the higher end of PC graphics in the 1990s.

Another was an existing chip company, Cirrus Logic, who got into the lower end of the PC graphics market and was fairly successful with lower-priced hardware that offered decent performance. But Cirrus Logic stumbled in the transition to the PCI bus and never really recovered.

Another company that thrived in this era was Matrox, a Canadian company that released their first graphics card in 1978. Matrox gained a reputation in the early 1990s for high quality analog displays for professional graphics use.

Another Canadian company, ATi Technologies, also managed to survive well in the era of graphics acceleration, releasing a number of accelerated graphics cards in the early 1990s, the Mach series.

The Upstarts: nVidia versus 3Dfx

nVidia was founded in 1993, and in 1995 introduced their first graphics chip, the nVidia NV1, and sold it to Diamond Multimedia as the Diamond Edge 3D. They hit the ground running with a 3D product, actually a pretty interesting architecture that did quadratic texture mapping. They had made a strategic partnership with SGS-Thomson for chip fab access. They entered a crowded market with no common 3D standards... until a year later, when Microsoft released the Direct3D APIs.

Also in 1996, another startup, 3Dfx -- a company founded by ex-Silicon Graphics engineers -- launched the Voodoo Graphics board. This was an add-in 3D card, which allowed it to be cheaper than nVidia's and many of the other emerging 3D cards, and 3Dfx had included their own software interface, called the Glide API, to make it easier to use Voodoo cards -- but only Voodoo cards. It wasn't designed to be a general industry standard.

nVidia already had the NV2 chip in the pipeline -- a quadratic rendering chip similar to the NV1 -- and when it launched, it got a lukewarm reception. But nVidia made a complete 180 degree turn and essentially started over on the NV3, which came to market as the RIVA (Real-time Interactive Video and Animation accelerator). nVidia made two extremely smart moves with this chip. The first was to use SGRAM, a new memory technology based on desktop DRAM that was dramatically cheaper than the dual-port video RAM many graphics chips used, on a 128-bit memory bus, so for the day it was wicked fast. And then they redesigned their 3D pipeline to support exactly what Microsoft did in Direct3D and, as well, compatibility with the industry standard OpenGL (Open Graphics Library... a standard actually invented at Silicon Graphics).

The RIVA 128, and subsequent RIVA chips, also offered good 2D performance, so they offered a big advantage over 3Dfx for anyone starting a new system. These did very well, too... I had one in the day. The RIVA 128 sold one million units in four months, which was pretty amazing in those days. However, the Glide API had grown very popular, and neither nVidia nor ATi (the one company to come out of the 2D era doing good 3D) could always deliver on what 3Dfx did. The Direct3D and OpenGL standards were still buggy.

The transition to 3D wasn't pretty. S3 delivered a line of 3D cards called ViRGE, which unlike their 2D offerings were extremely slow at 3D. Later versions proved better, but they never recovered -- the S3 graphics division was sold to VIA Technologies (and later sold to HTC Corporation in 2011). Cirrus Logic made a brief showing with a 3D architecture, but eventually left the graphics market to concentrate on audio.

Matrox released a 3D add-on for their 2D cards in 1994, but it was focused on 3D CAD and wasn't suitable for gaming. They did a graphics card in 1996, the Matrox Mystique, but like S3, it couldn't match the performance of the leading cards.

Real GPUs and a New King

Up to this point, while today we might call them all "GPUs", the graphics cards weren't really graphics processing units in the modern sense. They were accelerators. The first ones could move pixels around on-screen very fast, and some of the bit blitters could do sophisticated logical operations, but not anything like "computing". The 3D cards accelerated the actual rendering of triangles or polygons, but they weren't doing much in the way of computing.

In mid 1999, nVidia released what can arguably be called the first real GPU: the GeForce 256, the successor to the RIVA line. This chip added a T&L (hardware transform and lighting) engine, acceleration for MPEG-2 video, and full compliance with Microsoft's Direct3D 7.0. Pixel transform, clipping, and lighting in hardware accelerated a whole host of functions that had previously run on the CPU, and the result was a 50% speedup over rival hardware from 3Dfx, if the software supported it. But as usual, software support in the industry was a problem... not many games supported hardware T&L, and it didn't really start to spread for another generation of cards, at which point ATi had also caught up with their own hardware T&L.

nVidia followed up the GeForce 256 with the GeForce 2 and GeForce 2 MX, a "white box" card for PC OEMs to build in at a lower price. That was a key strategic move: as the demand for better graphics cards was being created, PC makers could get a good GPU from nVidia at a low price. But of course, gamers could upgrade.

The other interesting thing about this time, the end of the 1990s, was that the relentless development at nVidia had finally claimed their main rival, 3Dfx. The 3Dfx team just couldn't keep up with the rapid evolution in consumer GPUs that they had in some ways launched, but that nVidia and ATi had mastered. Late in 2000, 3Dfx was being pressed into bankruptcy by creditors, and ultimately sold out to nVidia. Some of the 3Dfx team went to work at nVidia, some went to ATi.

Matrox never managed to be successful in 3D, and basically split off from the PC mainstream to pursue specialized markets: video, machine vision, etc.

And so we entered the 21st century with really just two GPU companies left standing, nVidia and ATi.

Various Partnerships

Over the years, nVidia has formed partnerships that have proven very useful in changing how they make GPUs and how people use them. They've worked with universities since the early 2000s, and currently have official relationships with hundreds of universities throughout the world.

In 1998, nVidia signed a strategic partnership deal with Taiwan Semiconductor Manufacturing Company (TSMC), the largest "pure play" semiconductor foundry in the world. That became critical as the GPU transitioned from a little graphics chip to a device larger than any CPU. nVidia became large enough to work directly with a company like TSMC on process, giving them much of the power of a fully integrated chip manufacturer, while remaining one of the leaders of the new breed of "fabless" chip companies.

The Lessons of Chipzilla

Just to pause for a moment and reflect on Intel. Intel won the PC CPU wars in the 1980s, and most of the workstation and server CPU wars in the 1990s. They were fantastically successful, and they taught a bunch of good lessons:

Be the best at one thing, and make that thing huge. Intel was best at CPUs, and they made so much money at CPUs that it was really hard for anyone else to directly compete with them on CPU design.

When you do other things, they support that one thing you're best at. So sure, Intel makes networking chips, and they define bus standards like USB, PCI, and PCI Express. But all of that was designed to make PCs more interesting, to make them better, to sell more CPUs.

Software is critical: it sells more CPUs. Every so often, users stopped needing more CPU power. And sure, today, there's only so much the average user needs. But Intel worked to get the CPU doing more -- to make PCs cheaper by eliminating specialized hardware for MPEG and for audio, and to prevent any digital signal processor from grabbing hold.

Keep old markets -- don't let anyone else get a foot in the door of your market, if you can help it. If someone's trying to add a new feature to your PC, some new hardware, find a way to do it with the CPU, even if you have to add new instructions (crypto, for example).

Find new markets -- keep expanding. Intel originally made the 8086 and the 8088. These days, they have processors for high-end multi-chip servers. They have massively parallel computing engines. They make processors for desktops, workstations, laptops, tablets, even phones.

"Only the paranoid survive" -- that was Andy Grove's motto. It's a good one. If you're always worried about the other guy, about being caught, about falling behind, you're probably OK. When you get complacent, you fail.

These things all feed each other, too. Making a processor for a server lets Intel make a $1000+ processor today, but learn things that'll fit in a $50 processor in a few years. Making that $50 processor faster lets them put more on the CPU, leaving less to spend on specialized hardware. Some server lessons were also applied to make very low power CPUs, which is now a huge part of a market Intel once didn't even play in. Lots of companies thought they had better ideas on how to organize, and many of them are gone.

nVidia looked at the Intel formula and followed it pretty closely. That's one of the main reasons they've done well.

Be the Best at One Thing

nVidia is all-in on the Graphics Processing Unit. True, today it's not just for graphics anymore, but they're still making GPUs. They have done some tablet/phone chips with ARM processors, but a big part of that was to ensure they had GPU presence in emerging markets like mobile computing.

There were long rumors that nVidia, like AMD, was going to make an x86 clone. They never did, and that was probably a smart move. That allowed Intel to work very closely with nVidia, establishing the Intel + nVidia system as the leading combination for gaming and workstation performance. Certainly AMD GPUs work just dandy in Intel systems, but AMD, being a competitor, doesn't get the cooperative work with Intel. Together they've worked out neat tricks, like on my laptop -- there's an on-chip Intel GPU for light graphics, but when any heavy lifting is needed, the nVidia GPU seamlessly kicks in.

Software Is Critical

If you go back to the early days of GPUs, when those various companies started making chips that went way beyond VGA, software was sometimes a mess. It was the Wild West in many ways, and this became pretty frustrating for users.

As described, nVidia had problems with jumping too far ahead with features that wouldn't be used by software, with game performance undermined by buggy, complex APIs, etc. Even as some of the competition vanished, gaming was clearly the technology driver at nVidia going into the 2000s. Here's what John Carmack of id Software had to say:

We've had closer relationships with Nvidia over the years, and my systems have had Nvidia cards in them for generations. We have more personal ties with Nvidia ... our games do get more hands-on polish time on the Nvidia side of things.

Nvidia does have a stronger dev-relations team. I can always drop an email for an obscure question. So its more of a socio-cultural decision there rather than a raw "Which hardware is better." Although that does feed back into it, when you've got the dev-relation team that is deeply intertwined with the development studio. That tends to make your hardware, in some cases, come out better than what it truly is, because it's got more of the software side behind it.

So nVidia got very serious about relations with software developers, so that developers get their games working reliably, nVidia gets in there quickly to fix bugs, etc. nVidia was also the first GPU company to put up a "betas" site for advanced users. And they had forums with gamers discussing settings, tweaks, and details. In short, all aspects of software support make for a better overall product reception. And especially for a part that some people buy new every year or two, that drives the all-important repeat business.

Keep Old Markets

nVidia is GPU. GPU is gaming. Thus, nVidia is gaming. We've seen how that's stressed: making sure that PC vendors can build in nVidia GPU cards. That gaming companies get their stuff working very well on nVidia hardware. That drivers are updated, that gamers are treated first-class.

Gaming drove nVidia to become the leading 3D graphics card company. Sure, you could argue that ATi -- now part of AMD -- is doing OK as well. But with gaming driving the GPU, it's clear that nVidia is doing better than ever, as discrete GPU market-share figures show. Those figures ran through 2Q2015, though I believe that nVidia is actually over 80% of the standalone GPU market right now.

Find New Markets

When you visit the nVidia main US page today (Visual Computing), the first area that pops up is gaming. But there are a bunch of others... and there's that "Visual Computing" headline. GPUs accelerating games IS visual computing... but they've figured out how to apply it to other areas.

The first one of these was professional graphics. Gamers push the envelope. They're less demanding about visual quality, and more demanding about ultimate performance, than CAD users. But with the rise of 3D CAD in mechanical design, electronics design, animation, etc., nVidia isolated a new market that was not fully served by gaming video cards. Certainly other companies pursued professional graphics as well, but not a company selling millions of GPUs per year.

The first nVidia Quadro GPU card used the same NV10 GPU as the GeForce 256. And it has actually stayed that way all the way up the line -- the professional GPU cards are introduced with an older version of an existing gaming chip. They usually come with more on-board memory, and fully mature drivers that don't need frequent updates. That was actually nVidia's innovation here: by using an older chip and OpenGL drivers already proven in gaming, they were able to deliver very stable OpenGL for professional 3D work.

For a while, nVidia also made multi-display Quadro GPUs for business. That's mostly been replaced by commodity GPUs, which routinely support 2-3 displays these days.

In the early 2000s, researchers at various universities started experimenting with writing complex math problems as GPU problems, and then using GPUs like nVidia's to solve them hundreds of times faster than a CPU could. This gradually grew into full third-party APIs that could ease the translation between math and graphics problems.

In 2006, nVidia released the first CUDA (Compute Unified Device Architecture) APIs. This was a set of interfaces that let programmers use nVidia GPUs for general computation without having to cast their problems as graphics operations. Programmers could more easily write programs in C/C++ or FORTRAN and really tap the power of the GPU for GPGPU (General Purpose GPU) computing, a whole new way to use the GPU. This had applications in video processing, photo processing, CAD, etc... but it's found its way into supercomputing as well. In fact, in 2010, the world's fastest supercomputer (Tianhe-1A, China) was a hybrid mix of CPUs and nVidia GPUs.

Unlike the professional GPUs, nVidia worked on specialized architectures for their "Compute" processing boards. But these are still GPUs; nVidia is just motivated to build a more flexible architecture, prove it in the computing market, and then move it into traditional graphics processors. And this is still happening.
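To give a sense of what that programming model looks like, here is a minimal CUDA sketch of the kind of data-parallel kernel GPGPU computing made routine: a simple SAXPY (y = a*x + y) over a million elements. The kernel and variable names are invented for this illustration; only the CUDA runtime calls (cudaMalloc, cudaMemcpy, the <<<blocks, threads>>> launch syntax) are the real API.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles one element: y[i] = a * x[i] + y[i].
// On a CPU this would be a sequential loop; on the GPU thousands of
// threads run the same instruction stream on different data.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = a * x[i] + y[i];
    }
}

int main() {
    const int n = 1 << 20;                 // ~1 million elements
    size_t bytes = n * sizeof(float);

    // Host-side data.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Carve out a pocket of GPU memory and copy the work data into it.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough thread blocks to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    saxpy<<<blocks, threadsPerBlock>>>(n, 3.0f, dx, dy);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check it (expect 3*1 + 2 = 5).
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The point of the model is visible in the kernel body: each thread computes one independent element, so the same code scales across however many cores the GPU provides.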
nVidia's very latest P100 GPUs -- 4,500 of them -- are currently moving into a supercomputer at the Swiss National Supercomputing Centre in Switzerland, used to analyze data from the Large Hadron Collider at CERN. The P100 is one of the largest chips ever made, at over 15 billion transistors, employing nVidia's latest Pascal GPU architecture.

And yes, there are other GPGPU languages right now, particularly OpenCL (Open Computing Language), which was started at Apple and spun off to the Khronos Group. OpenCL works on AMD and nVidia GPUs as well as the Intel Xeon Phi products. But nVidia got a huge boost in this field from first-mover status. ATi was getting serious about GPGPU computing, at first using their own proprietary Stream API, when the AMD buyout came along and tossed all the cards up in the air for a little while. Lots of early GPGPU applications were written for CUDA, and so their users preferred nVidia hardware.

In Conclusion

As Intel has shown, success breeds success. The GPU market has been closer than the CPU market recently, and certainly AMD has managed to keep up with nVidia on many fronts, which is probably why they're still around while so many others have failed or switched to a different product line.

It's a combination of marketing, good product engineering, good support, aggressiveness in all of these things, and especially, developing a good reputation among users. Some of it is also just timing your mistakes right... you can trace back the circumstances that doomed most of the competition, and it can come down to a bad product philosophy or just one trip at just the wrong time, something you never recover from. nVidia's diversification into different markets, while keeping the same product at its core, is also a huge advantage -- again, right out of the Intel playbook.

Why are there so few microprocessor manufacturers?

The modern microprocessor is one of the most complex things ever created by humans. Making a new CPU is a big deal, and one fraught with technical challenges, business barriers, and intellectual property minefields.

CPUs have a tremendous network effect. A CPU is most valuable when a large group of people develop software for it. Merely developing and producing a CPU is not only hard and expensive in its own right, but developing an ecosystem of programmers and systems that use it multiplies the problem by orders of magnitude. The fastest way to make a new CPU viable would be to add compatibility with older CPUs (which Transmeta tried with the Transmeta Crusoe), but even in that case the established Intel CPU ecosystem had too many barriers to entry to support a new CPU.

In this modern era, we face a CPU oligopoly, with Intel x86 (bolstered by Windows dominance), ARM (bolstered by embedded systems, tablets and phones) and POWER (bolstered by the server market), with a smattering of SPARC and Alpha. The markets are pretty well covered, with each vendor fiercely defending its area. ARM is trying to move upmarket by working with Microsoft. Intel is trying to move downmarket with cheaper, smaller CPUs. It's hard to justify investing in a new player to jump into this fiercely competitive market, which keeps driving prices down for a miraculous piece of hardware. Most of the expansion of computing power is happening in novel, parallel computing "co-processors" like GPUs and FPGAs.

The situation is not hopeless. China is building new RISC CPUs for domestic consumption that look pretty promising. In China, strategic companies may enjoy subsidies and shielding from competition and investor pressure that companies in other countries may not. For example, the Sunway CPU now powers the fastest supercomputer on the planet. In that case, the Chinese military is subsidizing development and acting as a captive market. This serves a national interest: that China become independent of other countries for computing technology. It also helps them prevent the incursion of "trojan horses" or "kill switches" in imported hardware.

Such issues pose no concern for most countries, so they remain happy to use existing CPUs on the market rather than incur the expense and complexity of developing new ones.

As quantum computing becomes more real, expect to see new CPU vendors appear. Though it is not clear if that would be a "CPU" per se, or just another GPU-like coprocessor in a modern computing system.

Why are GPUs so much larger than CPUs?

As others have pointed out, this isn't universally true. However, it is a general trend. For instance, die shots of Intel's Skylake show a large part of the chip's area devoted to the integrated GPU. The reason has to do with the type of workload each type of processor is targeted at.

CPUs

You can think of the CPU as the gatekeeper of all (software) compute work in a device. The Operating System (OS) code pointer (the location in RAM where the OS is) is loaded into the CPU at boot, and from there the OS code is processed by the CPU instruction by instruction* until it reaches some code that sets up a command for other processors -- e.g. the GPU.

Unless specifically told otherwise, all code is executed on the CPU. This means the CPU needs to be a jack-of-all-trades type of processor. It needs to handle OS routines like allocating memory, copying memory, doing arithmetic, sending commands to IO devices and handling user input equally well. It also needs to be reasonably fast at application routines that don't easily fit the execution scheme of any of the other processors -- e.g. a custom photo filter, video codec or text processing algorithm.

For something like graphics workloads or very, very parallel and very, very throughput-oriented compute tasks (not many if-then control structures, code-heavy routines, etc.), it makes sense to carve out a pocket of memory, load the task's work data into it, then send the location to the GPU and tell it "run this set of instructions on this part of memory".

But often -- and this is becoming more and more true in mobile devices -- programs (that won't be offloaded to a dedicated processor) contain little in the way of gigantic blocks of computation and more and more code-heavy, run-a-lot-of-different-instructions types of workloads. E.g. a mobile browser.

To speed up such workloads, the CPU has to be able to do many different types of instruction routines well, including, but not limited to:

1. Jumping around from one set of instructions to the next without having to wait for instructions from RAM (which results in the CPU sitting idle for ~100+ cycles).
2. Identifying arithmetic instructions that can be processed in parallel (i.e. one doesn't depend on the result of another) and processing them in parallel.
3. Spending very little time on the overhead from the various layers of abstraction the OS adds. This includes converting virtual addresses to physical addresses, switching between multiple running programs without much loss of time, and handling multiple threads accessing the same data.
4. Avoiding waiting on RAM for data it needs to process.
5. Checking the routines it runs to make sure they don't do anything illegal (e.g. divide by zero), jumping (quickly) to specialized routines that handle such cases, then resuming the original routine.
6. Handling requests from other devices (touchscreen, network, etc.) at any given moment, then resuming the original program without much loss of efficiency.

Improvements to gain some or all of the above would require things like:

1. Larger caches (to avoid going to memory), but that has diminishing returns. Beyond a certain cache size, adding more doesn't produce as large a performance improvement, since most of the data/code already fits in the cache.
2. Processing more instructions in parallel. But that also has limitations, as most software routines are written as sequential routines -- e.g. each instruction follows another and some depend on the result of an instruction before them.
3. Pipelining (https://en.wikipedia.org/wiki/Pipeline_(computing)) the processor in order to gain a frequency increase. But this has diminishing returns as there are penalties (https://en.wikipedia.org/wiki/Branch_misprediction) to this approach.

A general trend is that for the types of workloads that CPUs work on, there isn't any one thing that can be scaled indefinitely in order to increase performance. Increasing any single feature brings performance gains only up to a certain point. This naturally limits the number of transistors a CPU can use; at some point, throwing in more cache or more instruction pipelines really won't produce a worthwhile gain.

* In modern CPUs, instructions aren't processed exactly one after another. Multiple instructions can be processed in parallel when possible.

GPUs

GPUs work on a very specific -- though it's getting broader -- set of tasks. Specifically, because of their history as graphics processors, they work on tasks that have a lot (an almost infinite amount) of computation that isn't inter-dependent. If you throw the same workload at 2 GPUs (and 2 memory controllers and 2 sets of DRAM), they can perfectly divide the workload between themselves, and the result would be a speed increase of 2x.

This ability to scale seemingly to infinity means that the easiest and most straightforward way to increase GPU performance is to increase the number of processing pipelines. There's basically no logical bound on this -- though there is obviously a physical bound in terms of the number of transistors available, memory bandwidth and power consumption. This also means that if a chip manufacturer wants to increase their GPU's speed, they'll figure out a way to either provide more power to the chip or to design more power- and area-efficient processing elements, and then throw more of them onto the chip in parallel.

Moreover, compute tasks on a typical modern device often require a disproportionate amount of compute power for graphical purposes rather than code-heavy routines. Think of a 4K display and the number of pixels that need to be pushed for every pretty effect at a perfect 60fps without any dropped frames. Or games with any number of graphical effects that need to be processed in parallel.

These two factors -- the unbounded ability to scale processing power with area and the high demand for processing power -- naturally lead to GPUs being very, very large.
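To make the scaling argument concrete, here is a minimal CUDA sketch (the kernel name, buffer layout and gain value are invented for illustration) contrasting the sequential CPU version of a trivially parallel task -- brightening every pixel of a 4K frame -- with the GPU version, where each of the roughly 33 million bytes is handled by its own thread.

```cuda
#include <cuda_runtime.h>

// Illustrative pixel operation: scale the brightness of every byte of a
// 3840x2160 (4K) RGBA framebuffer. No pixel depends on any other pixel,
// so the work can be split across as many GPU threads (and as many GPUs)
// as are available -- the "just add more pipelines" scaling described above.
__global__ void brighten(unsigned char *pixels, int count, float gain) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count) {
        float v = pixels[i] * gain;
        pixels[i] = v > 255.0f ? 255 : (unsigned char)v;
    }
}

// The equivalent CPU version (shown for contrast only): one core walks the
// ~33 million bytes of the frame one at a time, or a few at a time with SIMD.
void brighten_cpu(unsigned char *pixels, int count, float gain) {
    for (int i = 0; i < count; ++i) {
        float v = pixels[i] * gain;
        pixels[i] = v > 255.0f ? 255 : (unsigned char)v;
    }
}

int main() {
    const int width = 3840, height = 2160, channels = 4;
    const int count = width * height * channels;   // ~33 MB per frame

    unsigned char *frame;
    cudaMallocManaged(&frame, count);              // unified CPU/GPU memory
    for (int i = 0; i < count; ++i) frame[i] = 100;

    // At 60 fps this has to finish roughly every 16 ms, which is why the
    // GPU answer is simply "more parallel execution units".
    int threads = 256;
    int blocks = (count + threads - 1) / threads;
    brighten<<<blocks, threads>>>(frame, count, 1.2f);
    cudaDeviceSynchronize();

    cudaFree(frame);
    return 0;
}
```

Because no pixel depends on any other, doubling the number of execution units roughly halves the time; that is exactly the property that lets GPU designers keep "throwing more onto the chip in parallel".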

Comments from Our Customers

Well cool. Intuitive and dead easy to learn in seconds

Justin Miller