Class II Special Controls Guidance: Fill & Download for Free


How to Edit the Class II Special Controls Guidance Easily Online

Start editing, signing and sharing your Class II Special Controls Guidance online by following these easy steps:

  • Click the Get Form or Get Form Now button on the current page to access the PDF editor.
  • Wait a moment for the Class II Special Controls Guidance to load.
  • Use the tools in the top toolbar to edit the file; your changes will be saved automatically.
  • Download your completed file.

The Best-Rated Tool to Edit and Sign the Class II Special Controls Guidance

Start editing a Class II Special Controls Guidance in a minute


A quick guide to editing Class II Special Controls Guidance online

Editing your PDF files online has become really easy, and CocoDoc is an excellent solution for making changes to your file and saving it. Follow our simple tutorial to get started!

  • Click the Get Form or Get Form Now button on the current page to start modifying your PDF
  • Add, change or delete your content using the editing tools in the toolbar at the top.
  • After editing your content, add the date and a signature to complete the form.
  • Review your form before you save and download it.

How to add a signature to your Class II Special Controls Guidance

Though most people are used to signing paper documents with a pen, electronic signatures are becoming more common. Follow these steps to add a signature:

  • Click the Get Form or Get Form Now button to begin editing your Class II Special Controls Guidance in the CocoDoc PDF editor.
  • Click on the Sign tool in the toolbar at the top.
  • A window will pop up; click the Add New Signature button and you'll have three choices: Type, Draw and Upload. Once you're done, click the Save button.
  • Drag, resize and position the signature inside your PDF file.

How to add a textbox to your Class II Special Controls Guidance

If you need to add a text box to your PDF to customize your content, take a few easy steps to complete it.

  • Open the PDF file in CocoDoc PDF editor.
  • Click Text Box on the top toolbar and move your mouse to position it wherever you want it.
  • Type in the text you need to insert. After you've written the text, you can make full use of the text editing tools to resize, color or bold it.
  • When you're done, click OK to save it. If you're not happy with the text, click the trash can icon to delete it and start over.

A quick guide to editing your Class II Special Controls Guidance on G Suite

If you are looking for a PDF editing solution on G Suite, CocoDoc PDF editor is a commendable tool that can be used directly from Google Drive to create or edit files.

  • Find CocoDoc PDF editor and install the add-on for Google Drive.
  • Right-click on a PDF document in your Google Drive and click Open With.
  • Select CocoDoc PDF from the popup list to open your file, and grant CocoDoc access to your Google account.
  • Edit your PDF documents in the CocoDoc PDF editor: add text and images, edit existing text, highlight content and trim up the text, then save and download the file.

PDF Editor FAQ

Are there any companies that are likely to "tax" the VR/AR market? What supply chain dependencies exist, if any?

I start with an attempt to define and restrict the scope of terms such as AR, VR and tax that are nebulous and context dependent. Then I examine the current state from a historical and market perspective. Then I consider the actual h/w, identify stable and unstable components, followed by a look at supply chains and opportunities. I end with a few discussions on physiological issues, GPUs and captive content distribution channels.

I discount content developer opinions and selective pre-order volume based metrics here - their (A) enthusiasm is driven by assumptions of first-mover advantages on immature platforms, and (B) they are not locked in to platforms and will switch to the largest player when that makes sense.

Definitions and restrictions

tax - A company X taxes a market Y when X maintains the most immediately useful tool-chain combinations of IP/services/products such that X may assume a default position in any product pipeline in Y.
Example: X=AWS (from Vaibhav Mallaya's comment) for Y=cloud.
Example: X=Texas Instruments for Y=the home-projection market.

Note:
  • For such taxes to exist, Y needs to be mature.
  • Taxation is never only a matter of technological superiority; it requires perception engineering and X's active participation.
  • Companies need to react in real time to developments in Y.
    Example: TI's tactics in creating a DLP monopoly (InFocus case).
    Example: Sharp/(MITI)'s decline and failure to contain Taiwan LCD.

Note: AR is not equivalent to VR
  • VR can use non-transparent display substrates, AR may not.
  • VR needs far greater calibration between the human visual system and the display, AR does not. Consequently VR needs more computing than AR does.
  • VR hardware is easier to produce but harder to manage than AR hardware.
  • AR may be made sleeker than VR.
  • AR is mobile, VR may be static.
  • AR is a generic term - it can mean vision, audio or haptics, but the popular press prefers to relate it to vision. VR is primarily based on the visual system.

This distinction should make it clear that these are two different types of supply chains. VR, due to its 'welder's mask' design approach, doesn't need to be too sleek; consequently it is easier to adapt existing component lines to work with VR. AR relies more on sleekness and wearability, so AR needs highly custom compactification - something no one has achieved yet, though I know all are working on it. Apparently there's also a new category that converges these two segments, called mixed reality.

Another point requiring specificity - I mean head mounted or near eye display based interactive technologies when I talk of AR/VR devices. The device has to have the features of a wearable and has to attempt interaction with the environment without specific or manual input. Apple Watch/Fitbit/Kinect/immersive earphones/that multi-speaker setup in your garage are not AR devices. A multi-projector PicoWall type immersive display does not qualify here either.

An opinion on the origins of the current AR/VR cycle

2007-2009: the iPhone. The adoption numbers shocked all the big players not part of the Apple supply chain - specifically HP/Intel/Google/Microsoft. They got hammered by investors. There was pressure on them to manage perceptions, and since they were losing the mobile war, they had to create new product segments.
The pressure increased exponentially around 2012, when mobility products started eating away at what were considered their safe, stable product lines - desktop and enterprise.

The formative years for AR/VR were 2009-2012, when everyone in the industry started taking on risks, helped by a stabilizing economy and consistently low rates. That's when we started seeing the zero-th generation of AR/VR noise - 2009 Microsoft Kinect, 2010 Intel IoT alliance, 2011 Google Glass, 2012 Google Tango/Oculus/LeapMotion etc.

Most of the investments appear to have been based on 'not getting left behind, again' fears. For example, it's safe to say in 2016 that Kinect/LeapMotion is no longer considered a unique, or relevant, entertainment experience. Glass was abandoned because Google did not see it becoming a consumer product. Oculus was acquired in a pay-to-play move by a social media company. Etc. [I have ignored the relatively unknown, but interesting, companies that existed for brief moments in this space. There are many.]

I can only conclude that the industry is force-driving the AR/VR wagon, not market demand. The risk we run is in killing the niche fields where AR/VR truly are utilitarian. It's certainly possible that we are mistaking curiosity for intent. So 2016 is going to be the bellwether year for 1st gen products. This is the year we get to determine what the markets truly care for and want as features.

Y=AR/VR

Next we draw some boundaries around the AR/VR market:
  • There is no market yet - no real numbers, only WAG projections. There is no perception of utility among consumers. It's not even certain that near-eye systems will ever steer outside enterprise or niche applications. Gaming does not provide volumes if it costs far more than ~300 USD (a reasonable, consumer-accepted baseline). We may be talking of VR/AR arcades, but not much of a market.
  • The market will coalesce around the niche the 2016 early adopters seem to be the most excited about. Most seem to focus on what it does for the enterprise.
  • From the perspective of supply chains, this takes time. After initial low rate prototyping, silicon/mechanical/electrical/integration/testing/shipping takes 2+ years before economies of scale, design version stability and reliable demand projections kick in. So supply chain coalescence may be expected around 2018/2019.
  • The companies that tax, the Xs, won't show up for another 4-5 years - 2020 at the earliest. There will be a lot of noise in between.
  • There are always black swans like Magic Leap. They have made the most respectable moves in the market along with Microsoft so far - respectability based on the engineering planning and strategic foresight of their moves.

So my predictions have a 2020 horizon.

Hardware subsystems and units in the chain

Let's now consider the typical systems such devices need. An incomplete list follows.
Items in the list may be lumped together into custom subsystems (for example, the S1 PCB in the Watch, which Apple marketing initially described as a 'chip' and now calls a 'SiP').

Generic systems
  • Power/PMICs
  • Power/Battery
  • Power/Amplifier
  • Environment/Color/light/Humidity/Microphone array
  • Environment capture/Camera
  • Mechanical/Chassis
  • Mechanical/Input
  • Electrical/Touch
  • Electrical/Interconnects
  • Mechanical/Flex board
  • Electrical/Mainboard
  • Electrical/Antenna and flex
  • Location/GPS
  • Telecom/Baseband processors
  • Wireless LAN/Wifi/PAN/BT/WLAN/PAN/non-standard radio/optical
  • Memory/NAND flash
  • Memory/DRAM
  • Processor/SoC
  • Processor/Interface controller
  • Processor/Video/graphics
  • Processor/ASICs for custom subsystems, may be integrated into the main SoC
  • Orientation/Inertial/Magnetic
  • Component level manufacturing/Testing
  • Subsystem assembly
  • Subsystem packaging
  • Device level packaging
  • Device level assembly
  • Cleaning/testing
  • QC metrology
  • Packaging

Display
  • Optical/Display optics - always custom
  • Optical/Digital light engine/DLP/LCoS/uOLED/LBS/LCD
  • Optical/ASIC or COTS drivers specific to the digital light engine

Input
  • Input/Gesture/RGBD cameras/Shaped light TX+RX pairs
  • Input/Gesture/HF acoustics
  • Input/Gesture/ASIC processor
  • Input/Gesture/Touch-free capacitive/E-field
  • Input/Gesture/Capacitive touch
  • Input/Gesture/x-IMU or air mouse/pointer type devices
  • Input/AudioRX/Speech/MEMS microphone/Array
  • Output/Haptic vibe/Bone conduction/Custom haptics
  • Optical/Gaze/IR illuminated/High frame rate
  • Optical/Gaze tracking ASIC/processor
  • AudioRX/Environment/MEMS/Piezo
  • AudioRX/Environment/Directional collocation
  • AudioTX/Mech/Bone conduction
  • AudioTX/Acoustic
  • AudioTX/Multi-channel acoustic pipes/immersive
  • Firmware (some require an RTOS, e.g. baseband, or dedicated processors, such as IMUs with some predictive filtering)
  • Firmware - Algorithms/Sensor fusion subsystem/Maybe in ASIC or analog or digital

Recognizing that everything can be customized, which adds time + money + uncertainty, customization is typically minimized. Most of the items that appear on the list can be permuted/combined into groups that make sense only to the systems groups/designers (a toy sketch of such grouping follows below).

Items like cleaning/testing/assembly may occur as needed and do add cost and time. Once you know how many units you think you are going to move, you can add automation, assembly robots and additional metrology, rent/acquire facilities, and increase staff as required. But designing/programming/testing/ensuring regulatory compliance all take time and need to be factored in. ODMs typically handle some aspects of this.
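To make the lumping concrete, here is a minimal Python sketch, purely illustrative, of how such a parts list might be modeled so line items can be grouped into custom subsystems; the component paths come from the list above, but the grouping and the risk metric are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class Component:
        path: str             # one line item, e.g. "Memory/DRAM"
        custom: bool = False  # custom parts add time, money and uncertainty

    @dataclass
    class Subsystem:
        name: str
        parts: list = field(default_factory=list)

        def customization_risk(self):
            # Crude proxy: fraction of custom parts in the group.
            return sum(p.custom for p in self.parts) / max(len(self.parts), 1)

    # A designer might lump the SoC, memory and interconnects into one
    # system-in-package style subsystem (a la the Watch's S1).
    sip = Subsystem("main-SiP", [
        Component("Processor/SoC"),
        Component("Memory/DRAM"),
        Component("Electrical/Interconnects", custom=True),
    ])
    print(sip.customization_risk())  # 0.33... -> one third of the parts are custom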
Looking closely at the factors that shape user experience

It may be noted that most of the items in the generic systems segment of the list are commodities; there is considerable experience in manufacturing such highly integrated systems, and they have stable supply chains. No company will create new 'taxes' based on those components, because of pre-existing entrenchment and options.

The latter systems, which focus on display (immersion) and natural interface experiences (interaction), are the difficult problems in AR/VR. So if anyone wants to levy a tax on the VR/AR market, they need to focus on hardware or software IP for:
  • Immersion,
  • Interaction.

From the VR supply chain perspective, some processing/tasks/interfacing may be handed over to tethered desktop systems (example: Oculus). A key issue is whether the market will like tethered wearable experiences. [I don't think so - the markets vote towards mobility in wearables. But there may be differing opinions on this. It's possible that content becomes so pretty that people don't notice a tethered experience.] If tethered systems become the primary model, then entrenched performance computing players may maintain their lead.

1. IMMERSION

The immersion stack has static optics (lenses, assemblies, light pathways, filter/anti-reflection/diffuse/specular coatings, waveguides, microlens arrays, optical beamformers, diffraction optics), a digitally controllable light engine, and a processor that typically takes the RLE-compressed image data from the graphics RAM and figures out how to switch the pixel states (a minimal decode sketch follows this list of items). Then there are display standards, algorithms and interfacing IP opportunities.

Static optics - These are typically polymer, but may be glass. Generally molded, sometimes machined. They require a high QA effort, but once baselines are met, results are consistent and easy to scale. Unless there's a unique lens manufacturing tech or coating materials science that optimizes volumes and costs specifically for VR/AR, you are unlikely to gain any traction trying to corner a market here. This is a high volume, low margin business.

Light engine - Some examples in AR are DLP (TI), LCoS (Holoeye/Himax), micro-OLED (eMagin), micro-emissive (LuxView, my employer) and LBS (apparently Magic Leap, Microvision). You will find details on the technologies here. There are regular LCD/OLED display components that are used in VR (Oculus and maybe Meta), but they won't shrink, and that's a problem.

Each specific light engine technology has a unique approach to decoding and storing the video stream and updating the pixel state. Unless the light engine is a standard product like an LCD variant, the chosen display technology will force a buy-in into a whole ecosystem that includes processors, firmware and IP licensing. These typically constrain the form factors of the final product design, so this is one of the biggest decisions a designer has to take. The light engine and image quality are also primary factors that shape user experience.

You may be aware that displays are singularly the most expensive subsystem even in the highly commoditized smartphone market. So this is where the biggest opportunities are. These companies (and their partner orgs that handle integration) are critical to supply chains, since they may offer fully integrated, 'CAD-and-drop', custom designs. Once a market gets hooked on a certain device experience, it won't care for anything else. First STN, then IPS LCDs killing the possibly superior plasma tech and the power efficient electrowetting/e-ink/Mirasol technologies is an example.

[Edit: A technical point on illumination - VR h/w is typically not contrast or brightness restricted, since the display doesn't compete with ambient illumination. But AR (or mixed reality) devices need to be considerably brighter than the light admitted by the see-through optical visor. Since the visor is typically designed to reflect light into the user's eyes, it adds some inefficiencies as well. So the light engines and illumination sources (see below) typically need both high conversion efficiency and native high lumen output. This is another point that makes the choice of light engine tech very critical to AR experiences.]

Illumination sources - The light engines may also require independent illumination sources, color sequencing technology and semiconductor photonics/lighting (laser diodes, LEDs). There are entrenched players here, and this club typically requires deep effort and commitment. If you have new and unique IP addressing illumination, you will be immensely valuable.

Display processor - See light engine. If you have an ASIC team and contracts/IP licensing with GlobalFoundries/TSMC, you are valuable.

Algorithms - Discussed later.
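As a concrete illustration of one small piece of the display processor's job, here is a minimal run-length decode sketch; the (count, value) pair scheme is a toy assumption - real light engine controllers use proprietary formats and decode in dedicated silicon, not in software:

    def rle_decode(stream):
        # Expand (count, value) pairs into a flat row of pixel states.
        row = []
        for count, value in zip(stream[::2], stream[1::2]):
            row.extend([value] * count)
        return row

    # A run of 3 'on' pixels, 2 'off' pixels, then 4 'on' pixels:
    print(rle_decode([3, 255, 2, 0, 4, 255]))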
The opportunities are big, but an org needs complete control of all the verticals here. You cannot source processed photonics from one place and optics from elsewhere, use COTS processors, and still consider yourself valuable. There's a reason Magic Leap needed the funding it got (the jury is still out on whether that's enough). The more unique your stack, the more verticals you need to control.

Let's look at examples. We know that Oculus and Meta are based on LCD technology. Companies like CastAR appear to be using DLP. I have heard conflicting opinions on the HoloLens tech stack - some say DLP, based on maturity and display contrast; others say not DLP, because DLP is power hungry [Edit: apparently LCoS - still power hungry, with inefficient illumination]. Microsoft's patents apparently indicate a homebrew, with ODG+Nokia lensing IP handling manufacture.

TI has had the DLP market cornered for a long time. Japanese/Korean/Taiwanese companies have LCD cornered. LCoS is still too spread out; there are possible consolidation targets there (especially after Google dropped Glass). LBS is risky, because of human factors. Micro-OLED is not bright enough. Samsung has regular OLED cornered, but like LCD, OLED doesn't shrink magically. I won't comment on emissive micro-displays. Micro-LCD is being worked out, but the Japanese/Taiwanese/Korean players can't be beat, only acquired - and they consolidate/disperse all the time.

2. INTERACTION

The interaction stack is built on permutations/combinations of sensors and algorithms. The primary intent with AR is to allow the displayed digital content to interact with and react to a user's environment. This is not as critical for VR, but you still need to detect the user's posture to enable immersion and reactive display content (what Oculus refers to as 'motion to photon').

Sensor hardware - Publicly documented hardware may include simple RGB cameras, depth-sensing RGB+D cameras (PrimeSense, RealSense, Kinect), MEMS ultrasonic array emitters/receivers (Przybyla's Chirp), RF TX/RX pairs (Project Soli), mini-IR/laser/time-of-flight sensors (Microsoft), MEMS acoustic microphone arrays (Akustica, Bosch), capacitive field sensors (Microchip), MEMS MARG (magnetic angular rate gravity, from AD, TI, InvenSense, Freescale...), and pressure and altitude sensors, as arrays, among many other types. These sensors enable the system to understand the environment and user intent.

One key insight that I think is unique - you can pay for high-accuracy sensors and write crap algorithms, or you can pay for low-accuracy sensors and write excellent algorithms, and get similar results. Leap Motion is an example of the former; the Apple IMU team is an example of the latter. Google's acquisition of Lumedyne shows which way the industry is headed.

Scoping the product experience is also very critical. Just because you can put 20 sensors on your device doesn't mean a better user experience. Multi-sensor algorithms are sophisticated and still need much development. Many orgs like AD, Freescale and Bosch have finally, after some 15 years in the wild, started to include quaternion f/w with their IMU chips - but too late - and they don't work on arrays (because of uncertainty propagation and parasitic calibration errors).
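To give a flavor of what such fusion firmware does, here is a minimal single-axis complementary filter - my own toy illustration, not any vendor's firmware - blending a gyro's integrated rate (smooth short-term, drifts long-term) with an accelerometer's gravity reference (noisy short-term, drift-free long-term):

    import math

    def fuse_pitch(pitch_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
        # One complementary-filter step for a single tilt axis. An alpha
        # close to 1 trusts the gyro; (1 - alpha) leaks in the
        # accelerometer's absolute tilt to cancel gyro drift.
        pitch_gyro = pitch_prev + gyro_rate * dt    # integrate rate: smooth but drifts
        pitch_accel = math.atan2(accel_x, accel_z)  # gravity tilt: noisy but drift-free
        return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel

    # 100 Hz samples of a device held near 0.1 rad pitch, with a biased gyro:
    pitch = 0.0
    for _ in range(300):
        pitch = fuse_pitch(pitch, gyro_rate=0.002, accel_x=0.0998, accel_z=0.995, dt=0.01)
    print(round(pitch, 3))  # ~0.101: the gravity reference (~0.100) plus a small bias offset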
Algorithms - The algorithms themselves are a big opportunity. The image processing required for depth-sensing cameras is still considered a heavy load and requires specific co-processors and optimized architectures. There are two core problems: 1. environment mapping/segmentation, and 2. gesture recognition.

The algorithms+sensors are still not accurate enough. And since these sensors drive the display content, there is little scope for ever processing this data online (because of latency), the way we deal with speech. The old IP that has been granted is fairly useless, though legally still menacing. I expect a new wave in fusion, SLAM and sensor hardware quality to negate this barrier.
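Some rough, purely illustrative arithmetic on why this sensor data has to be processed on-device; every number below is an assumption for the sake of the example:

    # Motion-to-photon budget at a 90 Hz display:
    frame_budget_ms = 1000 / 90           # ~11.1 ms per displayed frame
    sensor_read_ms = 2                    # assumed: capture + transfer share
    render_scan_ms = 5                    # assumed: render + scanout share
    compute_left_ms = frame_budget_ms - sensor_read_ms - render_scan_ms
    print(round(compute_left_ms, 1))      # ~4.1 ms left for all recognition work

    # Even an optimistic 20 ms network round trip blows the whole frame
    # budget on its own, which is why gesture/SLAM processing stays local,
    # unlike speech, which tolerates hundreds of milliseconds.
    wan_round_trip_ms = 20
    print(wan_round_trip_ms > frame_budget_ms)  # True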
There need to be more psycho-optics studies on how these displays and algorithms interact with the human visual system, and on how they affect us physiologically over repeated and/or prolonged exposures. I am not certain whether everyone will perceive the same displays similarly, or whether individual variations in eyesight will ruin experiences for some. Example: how does the dominant eye affect near-eye display perception? There is an opportunity here for ophthalmology to drive part of the discussion on algorithm design.

You will also find a lot more snake oil here than anywhere else. Academic research is very difficult to translate to production devices. The simulated CGI experiences that marketing shows us are not what people see in the wild (Kinect/LeapMotion). That's one big perception hurdle to cross.

Algorithm co-processors - The algorithms may be implemented in silicon + PROM (Microchip/MGC, Maxim/21100) or they may be licensed as firmware (Bosch). Such implementation opportunities may be converted into a segmented toolchain that can be licensed by independent or smaller vendors. In a way this would be similar to the fabless/foundry IP license and production models.

Metrology and calibration - This is the equivalent of the hard/soft iron and lifetime calibration issues in IMUs. Error rates and reliability are very, very critical here, since they ruin usability and experience. There are studies that pretty much predict why technological leaps such as Glass fail in the real world (Dix et al., Human-Computer Interaction, 2005). Most of my hesitation in being enthusiastic about this segment comes from this UX perspective.

So a company that makes a product that implements a few features right 100% of the time may actually take the spotlight away from something like HoloLens, which implements lots of features that fail now and then. Graceful failures don't mean very much. Reliable failures do, but someone has to figure out how to make this work.

[My personal opinion is that vision based approaches for gestures will never achieve the reliability that is required for high-volume adoption in the time window it is required in. They are simply trying to solve the wrong problem. But I am biased.]

Ecosystem opportunities - A big opportunity from a devices perspective is inaccessible to most small players - an ecosystem to use the AR/VR product inside. Only Google and Microsoft have this. And so would Apple, if it decided to release something - which it still may.

So that was the background. Let's answer the questions based on it.

1. Companies likely to tax the AR/VR market.

The usual light engine suspects - TI, if we go the DMD/DLP way, or the Korea/Japan/Taiwan LCD consortiums if we use LCDs or micro-LCDs. If LCoS makes an entry, then you may have Himax. Whoever controls GaN/GaAs wafer bonding and 3D stacking technology also has a play here.

Samsung, with all its subsidiaries and manufacturing partners, will definitely be taxing everyone. Sony's/Sharp's fortunes may reverse as well (I highly doubt it will be significant enough for a full recovery). Just about every pico projector or small form factor display specialist company may have a play here.

A note on LCoS: it has some unique wavelength selectivity and phase modulation characteristics. It will find unique volume applications for sure, but it has to mature (for example, ferroelectric LCoS) in time for it to be viable for AR/VR.

Then there are the sensor/A2D/ADC semiconductor manufacturers at play - all the MEMS and optical sensor companies. Among them, the ones with the best accuracy and precision may win. Google has a march on other companies with the Lumedyne acquisition, but I am not sure how mature their lines are.

Semiconductor photonics companies like Philips or Maruwa or Osram or Sumitomo are going to benefit if they just try and play nice with the AR/VR companies instead of focusing on their energy efficiency marketing.

Metrology software companies that provide enterprise and production support with standardized services for sensor algorithms, metrology and/or device calibration should be successful. Bosch is an example; there are several companies in the Illinois area as well. Prototype silicon has reliability issues that may be solved with standardized outsourced testing services.

There are opportunities for SFF computing and GPU companies to release plug-and-play modular hardware that ensures consumer grade laptops/desktops are compatible with VR/AR systems. USB 3.x standards appear to be a great way to enable such technology. It is likely that consortium based standards will be created to enable such channels, and such standards will be used to levy a tax. (See the note below for why entrenched desktop GPU players may not have much of a lead, while mobility/low power IP core licensing has strong opportunities.)

New actuator technology, like bone conduction, needs to be examined a little closer. There are things that can be done here with multi-actuator setups that don't seem to have been explored yet. Companies that already specialize in BC speakers (Dayton, for example) should see a steady flow of custom design RFQs. Somebody's going to eventually go towards a power-glove type solution (again!) and incorporate gyroscopic actuators (spinning inertial platters) for haptic feedback. Those that can make them small will have considerable play.

In addition to passive optics, some more mature and larger technology companies are considering active optics. The usual approach is to use polymer thin film composites of nematic LCs and other organic/inorganic birefringent materials to construct patterned active layers, sandwiched between glass and TCO layers, to form waveguides. An example may be DO.

Biometry companies and statistical human body model companies should also see better margins. The fashion market will dictate a few trends here, but I have no idea where to even begin with that.

Biocompatible polymer companies should also see a surge in inquiries, depending on the weight and fit of the final design. Fitbit had issues. So I expect 3M or Eastman to pick up a piece of the pie, along with their traditional plastics lines.
There could be a move to natural fibers as well, but I have no idea who the players here are.

The demand for software developers skilled in sensor implementation will continue its upswing, and I suspect to the point where many of these jobs go offshore. So offshore consultants need to figure out how to incorporate such skills into their pools. Initially, it won't be as straightforward as learning a framework - you will need graphics, mobility GPU, physics and EE/DSP skills.

Big players like Intel will keep trying to force their ideas on people, and they may succeed. Entries of companies like Apple into the fray will definitely tip the carts for every other company not part of their supply chain. You will see smaller companies without strong tech stacks disappear or get acquired in fire sales.

The receptivity of the Chinese market is another huge factor. If a small player wins there, then the companies in North America may not have a chance to adapt to the China-internal supply chain efficiencies. But that's a generic statement true for many segments. What will be interesting is whether AR/VR gets forced into a novelty segment like pico projectors were. And what I do know is that they, China, are looking at a very different, bigger picture and running in a completely different direction from the US or the Japanese/Koreans. It may pay to not be caught inside the Northern California feedback loop.

A general point - the chicken-or-egg problem: big manufacturing doesn't do small prototyping, but small manufacturing requires cash upfront and doesn't extend deep credit to small prototyping customers. Small companies that have big ideas but little to no credit generally find it difficult to execute on the actual vision - some specs and tolerances are just proportional to money. Manufacturing partners that look at the big picture and take on the risks may find themselves getting lucky. Hon Hai is doing just this. The ones based in the US or Korea seem to be providing only lip service.

Physical FMCE retail stores like Best Buy and the Apple/Microsoft Stores that showcase AR/VR and personalized wearables experiences are going to be critical as well. There is simply no other or better way to market such highly personal experiences. So capturing or partnering with distribution channels is critical if you don't have your own. This is also good for physical retail, because foot traffic + ecosystem sales. I expect Best Buy will do really well.

2. Supply chain dependencies.

  • GaN/GaAs wafer quality, availability and scaling to meet consumer demand.
  • Efficient illumination sources (LEDs, laser diodes). For non-emissive displays, the light has to be generated by something. Most people are unaware of how critical illumination sources, and the semiconductor companies that make them, are. There are maybe two, at most three, orgs that can actually provide sources at scale.
  • Precision diamond turning equipment for lens and mold manufacture.
  • Micro forming/micro deep drawing/micro injection molding and stamping tech IP. Apple is the only Co with an internal specialist group on this. Most people don't understand this end of the business, but it's critical, because miniaturization/compactification requires heavily non-ISO parts for fasteners (screws, for example: the low strength steel Y-wing screws with 0.2mm threads on the Apple Watch), latches, rocker/slider/MOM switch mechanisms and custom connectors (e.g. the hidden 6 pin port on the Apple Watch).
  • Foundries with stable MEMS, FinFET, <20nm design rules and processing. Anyone can flat out say GlobalFoundries, TSMC and Samsung Semi are going to make money; their recent investments (>$50B aggregate, 2012-now) reveal the same.
  • Sensor design IP accessibility. The IP needs to be not just good, but defensible against frivolous claims. Verified/reliability-approved designs are immensely valuable.
  • Sensor fusion/SLAM algorithm IP accessibility. Many AR/VR h/w orgs start building MVPs using barely functional, first-principles algorithms. The bare functions from open-source libs are okay, but don't scale well. It's easier to pay for specialist IP than to develop self-calibration routines or wait for years to get SLAM working correctly. Example - Bradski was one of Abovitz's first hires. There are about 100 people in the world with the know-how to do this well. Chances are, they are not on your team.
  • 3D packaging IP and industrial equipment, for system-in-package design. Very few orgs specialize in 3D packaging design/simulation.
  • EMC/EMI and EDA simulation tooling. Critical because of body proximity and tight packaging, with components physically on top of each other.
  • Wrap-around ultra-high-density flex, materials and manufacturers. For flexes to be 'wrappable' or bendable, the traces should not fail when bent but must be thin enough to bend easily. These are antagonistic design criteria, difficult to manufacture reliably at scale.
  • Small form factor interconnect manufacturers. Another thankless, behind-the-scenes role, but this is the reason devices become thinner or smaller, not Moore's law. Smaller semi chips do help, but reliability in IC packaging, molds and interconnect technology is what allows the signals to transfer. Very few specialist companies can handle high volume custom interconnects.
  • Bandwidth/fiber/telecom companies, and multi-hop/mesh network h/w+f/w developer companies. IoT needs a display, and AR/VR setups along with smartphones may be it. There's a huge effort behind the scenes for control of this space, which is seen as high growth. Entrenched players are leading this effort; not all are traditional networking companies.
  • Human factors analysis. Human bodies have statistical dimension distributions - Dreyfuss' text The Measure of Man and Woman doesn't give you digitally usable data. Companies specialize in collecting statistical body data and rendering it into useful forms (including CAD, finite elements and point clouds). Then there are the gait, form, structure and posture datasets that designers love to have. This industry is bound to benefit from all kinds of wearables.
  • Fab cycles for fabless, smaller players.
  • Polymer orgs and their consultants. Polymers and specific recipes are used in everything from mechanicals to optics to packaging to flexes to fibers. DuPont's Kapton XC used on Apple's stealth flexes is an example.
  • Industrial optic metrology/lightfield homogeneity/illumination PSF measurement tool makers. Cohu/Delta Design for line equipment, Nikon, Hexagon, a bunch of European companies.
  • Regulatory policies on eye fatigue/illumination exposure. There will be guidelines issued and there will be some posturing. See the note on health risks below.
  • Physical retail stores for distribution channels.

Note on regulations and health risks (cc. Colin Jensen)

Head-mounted devices will fall under both FCC and FDA mandates (21 CFR 1000.15). Device makers will try to force FCC-only oversight by disavowing any medical applications - but FD&C sec. 531 makes it clear that such devices should require FDA oversight.
However, in the end this may be a matter of legal clarification that really cannot be predicted. FCC regulations are murky, and I am unsure of how the agency arrives at its conclusions. FDA operations are clearer, or maybe I am just more familiar with them, so I focus on those.

Broadly - the first gen products will have standard disclaimers and warnings removing/reducing liability (<30 mins of usage, >16 yrs of age, etc.). The FDA, as a matter of practice, does not 'go after' nascent businesses unless something egregious happens, or it receives complaints or reports of concern from the practicing medical community. It will first study the physiological effects under expert oversight, then issue guidances, and finally create regulations and standards that equipment manufacturers would have to follow. This takes time, and industry comments are always sought and taken into consideration.

Unless the devices are deemed absolutely dangerous, commerce may be allowed until guidances are issued. If the expert panel finds significant chances for injury to occur, it will select a classification for such near-eye devices (Class I being safest, III being worst). The near-eye industry will fight to get its products classified Class I at worst.

The LCD/DLP based devices may be declared Class I, depending upon the raw lumens they output, among other factors. Many AR devices may actually not require classification at all, especially if they are battery operated - because limited power restricts the illumination output. However, this is not true for tethered devices such as those from Oculus. They may try to increase their display brightness, which may increase health risks (see below).

Google/Magic Leap, if they use laser diodes as sources, may get a Class II (with a mandatory 510(k) filing, since there are no equivalent products that shine lasers into your eyes for long periods of time). The increased regulatory costs may be passed down to customers. Regulation may also force such tethered/high lumen near-eye devices to be sold under medical or occupational guidance, etc.

Health risks - the broad categories are
(A) simulator sickness,
(B) repeated/prolonged exposure.

Simulator sickness is an ongoing discussion - some are of the opinion that initial sickness can be overcome by sitting and acclimatizing over time. They assume that this initial barrier won't restrict adoption. I disagree, but won't dwell on this.

Prolonged exposure - the physics of head mounted systems, with their brightness, near-eye lensing, moderate temperature, pressure and electromagnetic exposure dosage, and the straps+weight of the system, should definitely require physiological/ophthalmological oversight. For example, some wrap-around strap-based designs do increase mechanical pressure or physical stress on the superficial temporal, occipital and angular/facial veins/arteries. The physics (inertial motion, eccentric mass distribution) of head mounted gear will lead to conditions similar to, or worse than, what commonly causes 'helmet whiplash', or gorilla arms for controllers. RSI is also to be expected with unsupported necks and arms.

Note that these are the obvious issues - there will be others, such as those pertaining to blink reflex, ocular hypertension, trauma to general orbital systems and many more. I don't have the expertise to assess these.

The risks will have to be evaluated against the benefits. Many concerns may be addressable through appropriate engineering and quality control.
I only hope that systems designers follow a 'do no harm' or ALARP principle and consider such issues in their system design.

Is medical regulation a barrier to entry? It depends. Not for most players, but certainly for the homebrewers. The onus is on the very first devices to show that they are not dangerous. After that, every other manufacturer only needs to claim equivalence with the first device. This may still deter some players not familiar with policies and hinder investment in such players.

NB - a Class III classification is highly unlikely, but could prove very dangerous to the AR/VR industry.

However, I expect there will be no regulatory roadblocks until at least 2018 or later.

Note on content channel and app-store lock-ins

All major players (Microsoft/HoloLens/Windows Store, Google/Magic Leap/Play Store...) will have their own content channels. It's hard to expect Steam or any non-affiliated channels - those not affiliated with either a major h/w or OS maker - to succeed. Example: Leap Motion; Amazon's app store didn't do too well. Another example: developer funds, like those created by Highland for Leap Motion, don't work out to create a natural ecosystem when competing against integrated app stores.

A generic, system-independent app store faces huge issues, as was initially demonstrated by the fragmentation in Google's Play Store, where many apps didn't work the same, or at all, on different versions of Android phones. So platforms like OpenVR face big obstacles. First gen h/w products with heavy subsystem and performance fragmentation at the h/w and OS level cannot simply be virtualized away in software at the outset, imo.

This point becomes especially pronounced if your display tech just uses different physics to get the information into the user's eyes (Magic Leap - lightfield; Oculus et al. - stereoscopic displays; HoloLens - TBD) or uses different types of sensors and recognition algorithms with different levels of resolution/accuracy to capture input. If your immersion/interaction stacks are different, you would introduce huge inefficiencies/bloat in trying to virtualize all possible hardware combinations into a single API. Some UI choices may stop making sense on different stacks/controller choices.
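A toy sketch of that fragmentation argument - every device name and capability flag below is hypothetical, invented purely for illustration: once devices differ in display physics and input primitives, a 'unified' API degenerates into per-device branches anyway.

    # Hypothetical capability descriptors; real devices differ even more.
    DEVICES = {
        "lightfield-hmd": {"display": "lightfield", "gesture": "rgbd"},
        "stereo-hmd":     {"display": "stereo",     "gesture": None},
        "waveguide-ar":   {"display": "waveguide",  "gesture": "tof"},
    }

    def render_ui(device_name):
        caps = DEVICES[device_name]
        # The 'generic' layer immediately forks on display physics...
        if caps["display"] == "lightfield":
            place = "true depth placement"
        elif caps["display"] == "stereo":
            place = "disparity-only placement"    # depth cues differ, UI must change
        else:
            place = "additive overlay placement"  # a see-through AR display can't show opaque black
        # ...and forks again on input, so the abstraction buys very little.
        pick = "gaze+clicker" if caps["gesture"] is None else caps["gesture"] + " gestures"
        return place, pick

    for name in DEVICES:
        print(name, render_ui(name))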
On why desktop GPU companies don't have a guaranteed tax

  • Because they are not common to all AR/VR graphics pipelines.
  • Gaming based VR rigs may need them, but baseline h/w margins are low and the larger market is not receptive to premium performance h/w.
  • Market drivers may be low power, small footprint, mobility and cost, once baseline performance is guaranteed.
  • Many h/w companies would prefer to eventually develop their own low power SoCs, specialized to their own tech stacks, rather than pay a premium for generic systems (see the point about integrated verticals; example: Apple and PowerVR/Imagination Tech IP, Cadence/Tensilica, Synopsys...). Traditional GPUs are good for generic development with the hardware abstracted away, but performance optimization starts at architectural design. [I am a biased, rotten h/w guy, you can tell.]
  • You can mix and match IPs and foundries. Sub-20nm FinFETs are now stable and a big deal for next gen mobility SoCs. So depending upon an org's commitment and strategy, it's not that difficult to develop application-optimized graphics engines at the system design level while still getting next gen fab. Reducing generational dependencies on external architectures is generally a good idea and gives you better control in targeting experiences.

I think the position that 'desktop-class GPUs are mandatory' comes from assuming that VR/AR == performance gaming and Oculus; not casual gaming, not enterprise, not productivity. I doubt this.

From a display h/w perspective - pixels need sufficient physical transition time to be able to switch really fast. Only micro-LEDs offer the kind of refresh rates mentioned in Binstock's article in engineering-stable designs. Liquid crystals don't switch that fast, because of hysteresis, and it's a mechanical torsion we are talking about. Ferroelectric LCs do, but are not out of the labs yet. DLPs can, but they have ringing and pull-in stiction issues that they have to spend more time fixing, otherwise you get jitter and color breakup (the smaller the pixels, the worse that problem). That's one of the reasons why solid state photonics is such a huge deal. LC based displays may work for AR with selective region based decompression/refresh, but AR doesn't commonly use LCDs. Please note the disclosure below.

So current gen displays don't support insane, >90Hz-peak refreshes - which implies that there's no need for 120FPS GPU support (a quick worked calculation follows below). Magic Leap's LBS tech may also surpass 90Hz refresh rates easily, and they would definitely need a custom SoC to fit their mobility design.
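To make the pixel-physics point concrete, here is some illustrative arithmetic; the 5 ms liquid crystal response time is an assumed round number, not a measured spec for any particular panel:

    # Why pixel switching physics caps useful refresh rates:
    lc_response_ms = 5.0   # assumed gray-to-gray switch time of a fast LC panel
    for fps in (60, 90, 120):
        frame_ms = 1000 / fps
        settled = lc_response_ms < frame_ms / 2  # rough rule: settle well within a frame
        print(fps, round(frame_ms, 1), settled)
    # 60 -> 16.7 ms frame: fine; 90 -> 11.1 ms: marginal; 120 -> 8.3 ms: the
    # pixel spends most of the frame mid-transition, causing smear/ghosting.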
A note on the '150Hz' FPS that eyes can supposedly detect - that's a number that apparently a WoW gaming forum spewed out, and I believe it is inaccurate. Eyes don't scan in consistent frames (see Andrew Watson's work at NASA on the physiology of display perception, or Holmqvist et al., Eye Tracking: A Comprehensive Guide to Methods and Measures, 2015). Physiologically, if we had to converge on a number it would be much higher, and we would risk forcing multiple concepts of biological photon sensitivity, contrast response and vision/motion perception onto a not very useful generic concept of hardware frame rates. The actual physiological processes of vision perception are very relevant to near-eye display people, mind you. But that's another discussion for another place.

I realize I focused on the most unglamorous aspects of product design and lifecycles here, but hey, this is what it takes. Some poor sod has to sit down and figure these things out so we can dream our rainbow dreams in PowerPoints, clean APIs and rendered CAD images.

I deliberately avoid naming specific companies, except GlobalFoundries/TSMC/Samsung Semi, because lock-in depends on who's willing to take the biggest bet on this market. Many orgs that have a play here haven't shown deep commitment yet, except maybe Microsoft and Google.

Disclosure: I design, build and analyze micro-electromechanical systems and optoelectronic hardware for display/imaging technology used in ultra-miniature (pico) light engines, and sensor technology for human-computer interfaces, at Ostendo, a display technology company in SoCal that also makes near-eye products. I invented a sensor+algorithm stack called deepSense with applications in AR/VR/biomed/robotics and real life. It's just the greatest thing since Nyan Cat. I am fully of that opinion.