Interface-Based Service Composition With Aggregation: Fill & Download for Free

How to Edit Your Interface-Based Service Composition With Aggregation Online With Efficiency

Follow the step-by-step guide to get your Interface-Based Service Composition With Aggregation edited for the perfect workflow:

  • Select the Get Form button on this page.
  • You will be taken to our PDF editor.
  • Edit your file with our easy-to-use features, like highlighting and blackout, using the tools in the top toolbar.
  • Hit the Download button to save your finished document for future reference.

We Are Proud to Let You Edit Interface-Based Service Composition With Aggregation in the Most Efficient Way

Take a Look At Our Best PDF Editor for Interface-Based Service Composition With Aggregation

How to Edit Your Interface-Based Service Composition With Aggregation Online

When you edit your document, you may need to add text, fill in the date, and do other editing. CocoDoc makes this kind of form editing very easy. Let's see how it works.

  • Select the Get Form button on this page.
  • You will be taken to our PDF editor page.
  • Once you are in the editor, click a tool icon in the top toolbar to edit your form, such as adding a text box or a cross mark.
  • To add a date, click the Date icon, then hold and drag the generated date to the field you need to fill in.
  • Change the default date by deleting it and typing the desired date in the box.
  • Click OK to confirm the added date, then click the Download button to save a copy.

How to Edit Text for Your Interface-Based Service Composition With Aggregation with Adobe DC on Windows

Adobe DC on Windows is a popular tool for editing your files on a PC. It is especially useful when you need to finish file edits offline. So, let's get started.

  • Find and open the Adobe DC app on Windows.
  • Find and click the Edit PDF tool.
  • Click the Select a File button and upload a file for editing.
  • Click a text box to adjust the font, size, and other formatting.
  • Select File > Save or File > Save As to confirm the changes to your Interface-Based Service Composition With Aggregation.

How to Edit Your Interface-Based Service Composition With Aggregation with Adobe DC on Mac

  • Find the file you want to edit and open it with Adobe DC for Mac.
  • Navigate to and click Edit PDF in the panel on the right.
  • Edit your form as needed by selecting the tools from the top toolbar.
  • Click the Fill & Sign tool and select the Sign icon in the top toolbar to create your own signature.
  • Select File > Save to save all your edits.

How to Edit your Interface-Based Service Composition With Aggregation from G Suite with CocoDoc

Do you use G Suite for your work and need to sign a form? You can edit your form in Google Drive with CocoDoc, so you can fill out your PDF and get the job done in a minute.

  • Install the CocoDoc for Google Drive add-on.
  • In Drive, browse to the form you want to fill, right-click it, and select Open With.
  • Select the CocoDoc PDF option, and allow your Google account to connect to CocoDoc in the popup window.
  • Choose the PDF Editor option to begin your filling process.
  • Click a tool in the top toolbar to edit your Interface-Based Service Composition With Aggregation where needed, such as signing or adding text.
  • Click the Download button to keep a copy in case you lose your changes.

PDF Editor FAQ

What do you advise while coding? Coding the whole program then managing the exceptions or doing both at the same time?

That depends on whether or not I know what I'm doing. When trying out new frameworks, domains or languages, I tend to code a "thin" prototype, to reach a proof of concept quickly. That means long code blocks, repeated patterns, stubbed exceptions, dirty loops, whatever it takes - plus "log only" methods that could lead to improved behaviour later on (validation, service lookup, exception handling, etc.). It is not entirely clear at that point whether my approach is correct or whether I'm using anything the way it is meant to be used, so any polishing would be wasted. However, logging should be in place nonetheless.

Once all (or most) foreseeable problem fields have been touched, the prototype should be thrown away. Sometimes I'm inconsistent and settle for heavy refactoring instead. By now I should know where the app will grow, where the abstractable parts are, and what kind of architecture I would like.

Then, or when I've already done something similar in the past, I'll design my root elements: config; model/entities (which contain domain data) along with some first queries; DAOs or repositories; services (which contain technical functionality, including logging, persistence and technical messaging) and service lookup; persistent processes (if state engines are required, which is usually true); rules/validators; business/domain exceptions (which can happen if you do something wrong); and UI basics, like error message presentation, a login, a logout and, most importantly, an off switch.

I'll then implement one stack. One real entity with an ID, starting with one field (usually Person with Name), one view, one controller, one repository, one stub service, one message type, one table - one of everything I need. If it is at work or an important part of a private project, that's also a good moment to add unit tests and class comments. The stack will get the full clean-code treatment: methods for repeatable or abstract parts, proper naming, finals, systematic parameter lists, logging, externalized strings, custom exceptions, AOP, interface-based access, generated code, etc. Most of this will move up into base classes or invocation handlers during the subsequent implementation.

Once everything feels nice and neat, functionality can be expanded: component structure, templating, module config overlays, rule-based validation, action/model visibility permissions, model relations (1..n, m..n, composite, aggregate) - not everything possible, but most things expected.

This is the point where I would put most effort into technical exception handling, exception base classes, recoverable/non-recoverable exception types, exception presentation, and so on (see the sketch at the end of this answer).

After that, fanout can begin. With every addressed use case, more and more model entities, rules, exceptions, views, message templates, services, controllers and jobs will be added. Every now and then, the base classes need to be extended because some feature needs something new. Naturally, corner cases and bugs will show up, so the base stack implementation needs to be improved or stack variants will have to be made - but the fewer variants exist in abstract base code, the better.

After a while, new entities, rules, services etc. should have minimal, if not zero, impact on the base classes. Unfortunately, you can't start at this complexity level right away, as it will be oversized and clumsy for small, starting projects. However, in a product line, several projects can use the same codebase.

Usually, management will structure their plans in a way that contradicts or prohibits this somehow.
Also, once the codebase is evolved enough to allow you to quickly churn out projects from your reusable base classes, there will be a new language, framework or team forcing you to start over from scratch. I would prefer this approach anyway, because even then, your code will be structured better.
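To make the exception-handling groundwork above concrete, here is a minimal sketch of recoverable/non-recoverable exception base classes. Java is assumed (the answer mentions finals, AOP and invocation handlers), and every name here is a hypothetical illustration, not the author's actual code:

    // Hypothetical base class for all application exceptions.
    // Carries an externalized message key instead of a hardcoded string,
    // plus a flag the UI layer can use to choose between "retry" and "abort".
    // (In a real codebase, each class would live in its own file.)
    abstract class AppException extends RuntimeException {

        private final String messageKey;   // looked up in a resource bundle
        private final boolean recoverable; // can the user retry the action?

        protected AppException(String messageKey, boolean recoverable, Throwable cause) {
            super(messageKey, cause);
            this.messageKey = messageKey;
            this.recoverable = recoverable;
        }

        public String getMessageKey() { return messageKey; }
        public boolean isRecoverable() { return recoverable; }
    }

    // Business/domain errors: the user did something wrong, so retrying makes sense.
    class DomainException extends AppException {
        DomainException(String messageKey) {
            super(messageKey, true, null);
        }
    }

    // Technical errors: infrastructure failed; usually not recoverable by the user.
    class TechnicalException extends AppException {
        TechnicalException(String messageKey, Throwable cause) {
            super(messageKey, false, cause);
        }
    }

A presentation layer can then switch on isRecoverable() to decide between an error dialog with a retry option and a hard failure page - exactly the kind of decision this approach defers until one full stack exists.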

What are 10 IoT startups that are worth keeping an eye on in 2016?

Startup life is not about the spotlight. It's about disruption. It's about innovation. At 1000 Angels, it's our job to surface some of the most exciting early-stage investment opportunities.

Among the startups we predict will make a mark, we have identified 20 companies in particular to watch in 2016. The selection criteria were based on business lifecycle, founding team, traction, and the investors backing them.

Inverse

Founded in 2015 and based in the Bay Area, Inverse is a technology-driven media company covering the latest news, TV, movies, music, digital culture, science, and innovation for a millennial male audience. With over 20 employees, its investors include Greycroft Partners, Crosslink Capital, and Rothenberg Ventures.

Mobcrush

Mobcrush was founded in 2014 and is a mobile live game streaming platform and community. Its vision is to inspire the world's many personalities by providing a simple, accessible solution via mobile devices. The company has raised $15 million from venture capitalists Kleiner Perkins, CrunchFund, and Lowercase Capital.

Product Hunt

Product Hunt is a daily leaderboard of the best new products and one of the most influential platforms at the moment. People in tech as well as venture capitalists leverage this tool to scout new startups. It is the go-to place to announce the launch of a product. Some of its investors include SV Angel, Naval Ravikant, and Greylock Partners.

Patreon

Founded in 2013 and with over $17 million raised from investors such as Alexis Ohanian and Sam Altman, Patreon is a platform that allows content creators - artists, YouTubers, photographers, musicians, authors, writers, and so forth - to set up pages for their work and get support from their fans. The community currently comprises over 2,000 creators. Based in the Bay Area, the company employs over 40 people.

Nextbit

Though not yet available to the public, Nextbit is taking an interesting approach with its cloud-based smartphone, supposedly scheduled for release around the first quarter. The company has created an Android smartphone like those from Huawei, HTC, LG, and Samsung, but with one big difference: it offers unlimited cloud storage. The idea is that your data is always available, regardless of what device you're on. Right now, when you uninstall an app, all of that data is gone forever. But Nextbit's syncing technology will let you "pause" the app so that the data is no longer transmitted but can easily be reinstalled. Nextbit has started accepting preorders for its inaugural mobile device, called Robin. It's available for $400 and comes with a Snapdragon 808 processor, 3GB of memory, a 5.2" 1080p IPS display, a fingerprint scanner, a 13-megapixel rear camera, a 5-megapixel front camera, and amplified dual speakers. Earlier this year, the company also debuted a product called Baton, aimed at helping developers explore the possibilities of this new cloud-syncing world.

Brigade

2016 is an election year in the U.S., and there's no shortage of issues on the table. For those interested in doing more for the democratic process than simply casting a vote, Brigade wants to help. Started by Napster cofounder Sean Parker, Brigade has a platform that encourages civic action and empowers users to seek reform. In October, the company unveiled interactive ballot guides in San Francisco and in Manchester, N.H., to educate voters about ballot initiatives and candidates and to show users which of their friends had similar views or were supporting a particular cause, issue, or person.
As we approach the general election in November, it will be interesting to see how Brigade can mobilize armies of citizens to get out and vote.

YPlan

Need to figure out what you want to do today or later this week? Have you given YPlan a try? This London startup launched in 2012 and has since expanded to New York (in 2013) and San Francisco (in 2014). Its event discovery service, which competes with WillCall, Sosh, and others, is hoping to build a $1 billion business that aggregates happenings from Eventbrite, Ticketmaster, Google, Timeout, and others. Company cofounder Rytis Vitkauskas told VentureBeat's Paul Sawers that YPlan is planning to launch "in bunches of cities at the same time, and it will be in partnership with other brands and large distribution partners that will help us get up to speed in terms of visibility." Vitkauskas suggested that YPlan may be looking at other verticals, including perhaps meals. Whatever its next move, the company is focused on areas that can help it reach that billion-dollar mark.

Fuse

Fuse is promising to make collaboration between designers and developers easier. The company offers a tool, which recently became available to the public, that lets developers build apps similar to the way designers work in Photoshop, Sketch, or After Effects. It provides a "What You See Is What You Get" (WYSIWYG)-like editor that will work across various platforms. Fuse also offers real-time updating, so developers don't need to constantly output versions of their app to see how it runs - just one version will do for testing.

Operator

Behold the proliferation of virtual personal assistants. Joining Siri, Google Now, Cortana, and Facebook's M is Operator. Started by Uber cofounder Garrett Camp and former Zynga executive Robin Chan, the app launched in November but is not widely available. Chan told Tech Insider that Operator is centered around three tenets: using messaging for purchases, managing a logistics layer for moving goods, and capitalizing on the ubiquity of smartphones. Type what you're looking for in a text message - restaurant reservations, tickets to a show, a gift - and a human being will respond to help find what you're looking for. Operator will have to deal with scaling its business, as it relies on human beings to process requests instead of using artificial intelligence. In addition, the marketplace is filled with on-demand user services - even Facebook's M is a mixture of AI and human work. Right now, Operator is only available on the iPhone, but we could see an Android version coming soon, along with expansion into international markets.

DistroKid

One of two music services on this year's list, DistroKid is the brainchild of serial entrepreneur Philip Kaplan. It's a music-distribution service that helps musicians get their work on iTunes, Spotify, Google Play, Amazon, and more than 150 other stores. Over 25,000 artists use DistroKid today, paying $19.99 a year. After Pandora's recent purchase of Rdio's technology (for $75 million) and incorporation of some of its staff, the digital-music space could enter a transitional period that causes independent artists concern.
It's already difficult to get distribution without a major record label, and DistroKid wants to eliminate that frustration and help promote artists. In 2016, we could start seeing major record labels paying more attention to DistroKid and other emerging music startups, perhaps through acquisitions.

Layer

Layer brought forth its communication platform at an interesting time - just as messaging is growing in popularity, people want their real-time conversations to take place wherever they are. With Layer, developers can give their apps capabilities to let users talk with one another, or incorporate a customer-service layer. And with its user interface kit, SDK, and API, there are many tools developers can choose from. In 2015, the company launched to the public and created a fund aimed at investing in app businesses that build on top of its platform. It also rolled out a turnkey messaging platform and a user interface tool called Atlas.

Light.co

We typically take photos with our smartphones, and the picture quality increases with each new model. Apple has been upping its camera technology with each release of the iPhone. But smartphones still lack some features you get with a DSLR camera, particularly the quality of the different lenses. Yes, you can zoom with a smartphone camera, but the picture just doesn't come out right. And what about arranging different compositions? Light wants to take the joys you get from a DSLR camera and put them into a device the size of a smartphone. Last fall, the company announced its L16 product, priced at $1,699. It will start shipping sometime next summer. The camera captures a moment in time by taking multiple focal lengths simultaneously and then fusing them together to create a single high-quality image up to 52 megapixels in size.

Viv

Started by some of Siri's founders, Viv unveiled its artificial-intelligence ambitions in 2014, but its product has yet to appear. Viv is looking to take on Apple, Google, and Microsoft with an AI service it describes as "a global platform that enables developers to plug into and create an intelligent, conversational interface to anything." When Viv does emerge from stealth, it will be interesting to see how developers receive the product. Most users already have a digital personal assistant built into their smartphones, whether it's Google Now, Siri, or Cortana. Can Viv's AI be opened more widely to businesses to harness greater AI powers than what's currently on the market? And if Viv fulfills its promise, could it become a prime target for an acquisition by a tech giant? 2016 may answer these questions.

Burner

Burner provides disposable phone numbers, but the company is making its mobile app do more than send and receive calls and texts. In 2015, the company began giving developers more tools for its platform, starting with native integrations with Evernote, Slack, Dropbox, and SoundCloud. Soon after, it released a new option for developers to build their own custom integrations, in a step toward really opening up an ecosystem. Burner aims to make your phone number a conduit for data so you can do much more with that unique identifier.
How it will continue to fare against traditional telecommunications companies will be interesting to watch, especially as Burner builds up an assortment of custom, automatic integrations with other apps for users and developers to take advantage of.

Magic Leap

Augmented reality was a hot topic in 2015, with many speculating about the potential of Microsoft's HoloLens, Atheer's glasses for the workplace, and many others. One promising company, Magic Leap, recently announced it had raised $827 million for its unreleased product. Whether they're used for games, industrial hands-free work, or personal enjoyment, Magic Leap's devices will likely offer a whole new perspective on the world. The thing is, no products have really been made available for public consumption so far. That could change with Magic Leap next year.

LiveList

LiveList functions as the equivalent of TV Guide for livestreamed events, allowing fans to follow their favorite artists online. Launched in late 2015, the service focuses more on professional livestreams, but perhaps the company will start adding user-generated content like that found on Meerkat, Periscope, or Facebook Live. In chatting with the company, one of the things that came across was this notion of changing the music experience. Livestreaming will likely become a popular distribution tool for artists, whether they have a record label or not. Events like Coachella are already doing professional streaming, but what about individual artists such as Adele, Taylor Swift, Coldplay, or Roem Baur, or independent artists who want to offer fans a continued experience well after they leave a concert? At first glance, a listing service for livestreaming may seem like the equivalent of Yahoo's Video Guide app, but there's additional potential for the service, such as offering fans various vantage points when they stream concerts in their living rooms, or selling merchandise and music tracks directly from the livestream. The question for 2016 is: can LiveList realize that potential?

DroneDeploy

Drones have become so commonplace that the Federal Aviation Administration recently enacted regulations requiring these unmanned aerial devices to be registered. But that hasn't stopped companies from exploring the potential of commercial usage - just look at Amazon. DroneDeploy creates software for these vehicles, providing aerial mapping. Users can get data outputs without having to wait for hours. The company raised $9 million last year and moved its platform out of beta. In doing so, it's making drones more accessible to industrial companies that may want to legitimately use DroneDeploy's technology to help their business with tasks like surveying land or inspecting property.

3Scan

You may not have heard of 3Scan, but its market could matter to you. The company has upgraded the task of analyzing cells and tissues from analog to digital technology and is modernizing the way doctors, researchers, and biotech companies examine tissues. 3Scan provides digital renderings of biopsies and other tissue samples in 2D and 3D through the use of its Knife Edge Scanning Microscope. The company has raised more than $7 million in funding over the past four years from the Thiel Foundation, Data Collective, Dolby Family Ventures, SK Ventures, and others.

Crew32

Little is known about Crew32, the newest startup from Jason Nazar. The company is focused on the small-business service industry and has already raised $5.2 million.
Nazar's previous venture, DocStoc, was a document-sharing service that Intuit acquired two years ago before shuttering it in 2015. Nazar took to Facebook soon after the shutdown to explain the reasoning behind Intuit's actions, stating that despite his best efforts, "I got hit with resistance at every turn." Besides Nazar, the company counts BetterWorks cofounder George Ishii and http://Investd.in cofounder Yadid Ramot as cofounders on the team.

Not all of these companies are guaranteed breakouts in 2016, but there's something intriguing about each one. If nothing else, they are certainly worth watching, as it's our opinion that they'll have some major news sometime in the next 12 months.

[Sources: 15 interesting startups to watch in 2016 and 20 Startups To Watch In 2016]

Are there any companies that are likely to "tax" the VR/AR market? What supply chain dependencies exist, if any?

I start with an attempt to define and restrict the scope of terms such as AR, VR and tax, which are nebulous and context-dependent. Then I examine the current state from a historical and market perspective. Then I consider the actual h/w, identify stable and unstable components, followed by a look at supply chains and opportunities. I end with a few discussions on physiological issues, GPUs and captive content distribution channels.

I discount content developer opinions and selective pre-order volume based metrics here: their (A) enthusiasm is driven by assumptions of first-mover advantages on immature platforms, and (B) they are not locked in to platforms and will switch to the largest player when that makes sense.

Definitions and restrictions

tax - A company X taxes a market Y when X maintains the most immediately useful tool-chain combinations of IP/services/products such that X may assume a default position in any product pipeline in Y.
Example: X=AWS (from Vaibhav Mallaya's comment) for Y=cloud.
Example: X=Texas Instruments for Y=the home-projection market.

Note:
  • For such taxes to exist, Y needs to be mature.
  • Taxation is never only a matter of technological superiority but requires perception engineering and X's active participation.
  • Companies need to react in real time to developments in Y.
Example: TI's tactics in creating a DLP monopoly (the InFocus case).
Example: Sharp/(MITI)'s decline and failure to contain Taiwanese LCD.

Note: AR is not equivalent to VR.
  • VR can use non-transparent display substrates; AR may not.
  • VR needs far greater calibration between the human visual system and the display; AR does not. Consequently, VR needs more computing than AR does.
  • VR hardware is easier to produce but harder to manage than AR hardware.
  • AR may be made sleeker than VR.
  • AR is mobile; VR may be static.
  • AR is a generic term - it can mean vision, audio or haptics, though the popular press prefers to relate it to vision. VR is primarily based on the visual system.

This distinction should make it clear that these are two different types of supply chains. VR, with its 'welder's mask' design approach, doesn't need to be too sleek, so it is easier to adapt existing component lines to work with VR. AR relies more on sleekness and wearability, so it needs highly custom compactification - something no one has achieved yet, though I know everyone is working on it. Apparently there is also a new category that converges these two segments, called mixed reality.

Another point requiring specificity - I mean head-mounted or near-eye display based interactive technologies when I talk of AR/VR devices. The device has to have the features of a wearable and has to attempt interaction with the environment without specific or manual input. The Apple Watch, Fitbit, Kinect, immersive earphones, or that multi-speaker setup in your garage are not AR devices. A multi-projector PicoWall-type immersive display does not qualify here either.

An opinion on the origins of the current AR/VR cycle

2007-2009: the iPhone. The adoption numbers shocked all the big players not part of the Apple supply chain - specifically HP/Intel/Google/Microsoft. They got hammered by investors. There was pressure on them to manage perceptions, and since they were losing the mobile war, they had to create new product segments.
The pressure increased exponentially around 2012, when mobility products started eating away at what had been considered their safe, stable product lines: desktop and enterprise.

The formative years for AR/VR were 2009-2012, when everyone in the industry started taking on risks, helped by a stabilizing economy and consistently low rates. That's when we started seeing the zeroth generation of AR/VR noise - 2009 Microsoft Kinect, 2010 Intel IoT alliance, 2011 Google Glass, 2012 Google Tango/Oculus/LeapMotion, etc.

Most of the investments appear to have been based on 'not getting left behind, again' fears. For example, it's safe to say in 2016 that Kinect/LeapMotion is no longer considered a unique or relevant entertainment experience. Glass was abandoned because Google did not see it becoming a consumer product. Oculus was acquired in a pay-to-play move by a social media company. And so on. [I have ignored the relatively unknown but interesting companies that existed for brief moments in this space. There are many.]

I can only conclude that the industry is force-driving the AR/VR wagon, not market demand. The risk we run is in killing the niche fields where AR/VR truly is utilitarian. It's certainly possible that we are mistaking curiosity for intention. So 2016 is going to be the bellwether year for first-gen products. This is the year we get to determine what the markets truly care about and want as features.

Y=AR/VR

Next we draw some boundaries around the AR/VR market:
  • There is no market yet - no real numbers, only WAG projections, and no perception of utility among consumers. It's not even certain that near-eye systems will ever steer outside enterprise or niche applications. Gaming does not provide volume if it costs far more than ~300 USD (a reasonable, consumer-accepted baseline). We may be talking of VR/AR arcades, but not much of a market.
  • The market will coalesce around the niches 2016 early adopters seem most excited about. Most seem to focus on what it does for the enterprise.
  • From the perspective of supply chains, this takes time. After initial low-rate prototyping, silicon/mechanical/electrical/integration/testing/shipping takes 2+ years before economies of scale, design version stability and reliable demand projections kick in. So supply chain coalescence may be expected around 2018/2019.
  • Companies that tax, X, won't show up for 4-5 years from now, 2020 being the earliest. There will be a lot of noise in between.
  • There are always black swans like Magic Leap. They have made the most respectable moves in the market so far, along with Microsoft - respectability based on the engineering planning and strategic foresight of their moves.

So my predictions have a 2020 horizon.

Hardware subsystems and units in the chain

Let's now consider the typical systems such devices need. An incomplete list follows.
Items in the list may be lumped together into custom subsystems (for example, the S1 PCB in the Watch, which Apple marketing initially described as a 'chip' and now calls an 'SiP').

Generic systems
  • Power/PMICs
  • Power/Battery
  • Power/Amplifier
  • Environment/Color/light/Humidity/Microphone array
  • Environment capture/Camera
  • Mechanical/Chassis
  • Mechanical/Input
  • Electrical/touch
  • Electrical/Interconnects
  • Mechanical/Flex board
  • Electrical/Mainboard
  • Electrical/Antenna and flex
  • Location/GPS
  • Telecom/baseband processors
  • Wireless/LAN/Wifi/PAN/BT/WLAN/non-standard radio/optical
  • Memory/NAND flash
  • Memory/DRAM
  • Processor/SoC
  • Processor/Interface controller
  • Processor/Video/graphics
  • Processor/ASICs for custom subsystems, may be integrated into the main SoC
  • Orientation/Inertial/Magnetic
  • Component level manufacturing/Testing
  • Subsystem assembly
  • Subsystem packaging
  • Device level packaging
  • Device level assembly
  • Cleaning/testing
  • QC metrology
  • Packaging

Display
  • Optical/Display optics - always custom
  • Optical/Digital light engine/DLP/LCoS/uOLED/LBS/LCD
  • Optical/ASIC or COTS drivers specific to the digital light engine

Input
  • Input/Gesture/RGBD cameras/Shaped light TX+RX pairs
  • Input/Gesture/HF acoustics
  • Input/Gesture/ASIC processor
  • Input/Gesture/Touch-free capacitive/E-field
  • Input/Gesture/Capacitive touch
  • Input/Gesture/x-IMU or air mouse/pointer type devices
  • Input/AudioRX/Speech/MEMS microphone/array
  • Output/Haptic vibe/Bone conduction/Custom haptics
  • Optical/Gaze/IR illuminated/high frame rate
  • Optical/Gaze tracking ASIC/processor
  • AudioRX/Environment/MEMS/Piezo
  • AudioRX/Environment/Directional collocation
  • AudioTX/mech/Bone conduction
  • AudioTX/acoustic
  • AudioTX/multi-channel acoustic pipes/immersive
  • Firmware (some require an RTOS, e.g. baseband, or dedicated processors such as IMUs with predictive filtering)
  • Firmware - Algorithms/Sensor fusion subsystem, maybe in ASIC or analog or digital

Recognizing that everything can be customized, which adds time, money and uncertainty, customization is typically minimized. Most of the items on the list can be permuted/combined into groups that make sense only to the systems groups/designers.

Items like cleaning/testing/assembly occur as needed and do add cost and time. Once you know how many units you think you are going to move, you can add automation, assembly robots and additional metrology, rent/acquire facilities, and increase staff as required. But designing/programming/testing/ensuring regulatory compliance all take time and need to be factored in. ODMs typically handle some aspects of this.

Looking closely at the factors that shape user experience

It may be noted that most of the items in the Generic systems segment of the list are commodities; there is considerable experience in manufacturing such highly integrated systems, and they have stable supply chains. No company will create new 'taxes' based on those components, because of pre-existing entrenchment and options.

The latter systems, which focus on display (immersion) and natural interface experiences (interaction), are the difficult problems in AR/VR. So anyone who wants to levy a tax on the VR/AR market needs to focus on hardware or software IP for:
1. Immersion,
2. Interaction.

From the VR supply chain perspective, some processing/tasks/interfacing may be handed over to tethered desktop systems (example: Oculus). A key issue is whether the market will like tethered wearable experiences. [I don't think so - the markets vote towards mobility in wearables. But there may be differing opinions on this. It is possible that content becomes so pretty that people don't notice a tethered experience.] If tethered systems become the primary model, then entrenched performance computing players may maintain their lead.
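To see why tethering is contested, it helps to sketch a rough motion-to-photon budget. All numbers below are illustrative assumptions (a ~20 ms comfort target is commonly cited for VR), not measurements from any shipping device, and the class name is made up for the example:

    // Illustrative motion-to-photon latency budget. Every figure is an
    // assumption chosen for the sketch, not a measured value.
    public class LatencyBudget {
        public static void main(String[] args) {
            double target  = 20.0; // ms, commonly cited VR comfort target
            double imuRead = 1.0;  // ms, sampling + transport of sensor data
            double fusion  = 2.0;  // ms, sensor fusion / pose estimation
            double render  = 8.0;  // ms, application + GPU render time
            double link    = 2.0;  // ms, extra hop over a tether or cable protocol
            double scanout = 5.5;  // ms, display scanout (half a 90 Hz frame)

            double total = imuRead + fusion + render + link + scanout;
            System.out.printf("total %.1f ms of %.1f ms budget (%.1f ms slack)%n",
                    total, target, target - total);
        }
    }

Under these assumptions, almost the whole budget is consumed, which is why every extra hop - a cable protocol, an extra memory copy, a slow pixel transition - is contested territory in these designs.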
1. IMMERSION

The immersion stack has static optics (lenses, assemblies, light pathways, filter/anti-reflection/diffuse/specular coatings, waveguides, microlens arrays, optical beamformers, diffraction optics), a digitally controllable light engine, and a processor that typically takes RLE-compressed image data from the graphics RAM and figures out how to switch the pixel states (a toy decode sketch follows at the end of this section). Then there are display standards, algorithms and interfacing IP opportunities.

Static optics - These are typically polymer but may be glass; generally molded, sometimes machined. They require high QA effort, but once baselines are met, results are consistent and easy to scale. Unless there is a unique lens manufacturing technology or coating materials science that optimizes volumes and costs specifically for VR/AR applications, you are unlikely to gain traction trying to corner a market here. This is a high-volume, low-margin business.

Light engine - Some examples in AR are DLP (TI), LCoS (Holoeye/Himax), micro-OLED (eMagin), microemissive (LuxView, my employer), and LBS (apparently Magic Leap, Microvision). You will find details on the technologies here. There are regular LCD/OLED display components that are used in VR (Oculus and maybe Meta), but they won't shrink, and that's a problem.

Each specific light engine technology has a unique approach to decoding and storing the video stream and updating the pixel state. Unless the light engine is a standard product like an LCD variant, the chosen display technology will force a buy-in into a whole ecosystem of processors, firmware and IP licensing. These typically constrain the form factors of the final product design, so this is one of the biggest decisions a designer has to make. The light engine and image quality are also primary factors shaping user experience.

You may be aware that displays are singularly the most expensive subsystem even in the highly commoditized smartphone market. So this is where the biggest opportunities are. These companies (and their partner orgs that handle integration) are critical to supply chains, since they may offer fully integrated, 'CAD-and-drop', custom designs. Once a market gets hooked on a certain device experience, it won't care for anything else. First STN, then IPS LCDs killing the possibly superior plasma tech or the power-efficient electrowetting/e-ink/Mirasol is an example.

[Edit: A technical point on illumination - VR h/w is typically not contrast or brightness restricted, since the display doesn't compete with ambient illumination. But AR (or mixed reality) devices need to be considerably brighter than the light admitted by the see-through optical visor. Since the visor is typically designed to reflect light into the user's eyes, it adds some inefficiencies as well. So the light engines and illumination sources (see below) typically need both high conversion efficiency and high native lumen output. This is another point that makes the choice of light engine tech very critical to AR experiences.]

Illumination sources - The light engines may also require independent illumination sources, color sequencing technology and semiconductor photonics/lighting (laser diodes, LEDs). There are entrenched players here, and this club typically requires deep effort and commitment. If you have new and unique IP addressing illumination, you will be immensely valuable.

Display processor - See light engine. If you have an ASIC team and contracts/IP licensing with GlobalFoundries/TSMC, you are valuable.
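To make the display processor's job concrete, here is a toy run-length decoder that expands (run, value) pairs into a scanline of pixel states - the kind of step such an ASIC performs in hardware. The encoding shown is generic (real formats are vendor-specific), and the names are invented for the example:

    // Toy run-length decoder: pairs of (run, value) expand into a scanline.
    // Real display ASICs do this in hardware with vendor-specific encodings;
    // this is only an illustration of the decode step.
    public class RleScanline {

        static int[] decode(int[] encoded, int width) {
            int[] pixels = new int[width];
            int pos = 0;
            for (int i = 0; i + 1 < encoded.length && pos < width; i += 2) {
                int run = encoded[i];
                int value = encoded[i + 1];
                for (int j = 0; j < run && pos < width; j++) {
                    pixels[pos++] = value; // drive this pixel to 'value'
                }
            }
            return pixels;
        }

        public static void main(String[] args) {
            // 8-pixel line: three dark pixels, four bright, one dark.
            int[] line = decode(new int[] {3, 0, 4, 255, 1, 0}, 8);
            System.out.println(java.util.Arrays.toString(line));
        }
    }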
Algorithms - Discussed later.

The opportunities are big, but an org needs complete control of all the verticals here. You cannot source processed photonics from one place and optics from another, use COTS processors, and still consider yourself valuable. There is a reason Magic Leap needed the funding it raised. (The jury is still out on whether that's enough.) The more unique your stack, the more verticals you need to control.

Let's look at examples. We know that Oculus and Meta are based on LCD technology. Companies like CastAR appear to be using DLP. I have heard conflicting opinions on the HoloLens tech stack - some say DLP, based on maturity and display contrast; others say not DLP, because DLP is power hungry [Edit: apparently LCoS - still power hungry, with inefficient illumination]. Microsoft's patents apparently indicate a homebrew, with ODG+Nokia lensing IP handling manufacture.

TI has had the DLP market cornered for a long time. Japanese/Korean/Taiwanese companies have LCD cornered. LCoS is still too spread out; there are possible consolidation targets there (especially after Google dropped Glass). LBS is risky, because of human factors. Micro-OLED is not bright enough. Samsung has regular OLED cornered, but like LCD, OLED doesn't shrink magically. I won't comment on emissive micro-displays. Micro-LCD is being worked out, but the Japanese/Taiwanese/Koreans can't be beaten, only acquired - and they consolidate and disperse all the time.

2. INTERACTION

The interaction stack is built on permutations and combinations of sensors and algorithms. The primary intent with AR is to allow the displayed digital content to interact with and react to a user's environment. This is not as critical for VR, but you still need to detect the user's posture to enable immersion and reactive display content (what Oculus refers to as 'motion to photon').

Sensor hardware - Publicly documented hardware may include simple RGB cameras, depth-sensing RGB+D cameras (PrimeSense, RealSense, Kinect), MEMS ultrasonic array emitters/receivers (Przybyla's Chirp), RF TX/RX pairs (Project Soli), mini-IR/laser/time-of-flight sensors (Microsoft), MEMS acoustic microphone arrays (Akustica, Bosch), capacitive field sensors (Microchip), MEMS MARG (magnetic angular rate gravity, from AD, TI, InvenSense, Freescale...), and pressure and altitude sensors, as arrays, among many other types. These sensors enable the system to understand the environment and user intent.

One key insight that I think is unique - You can pay for high-accuracy sensors and write crap algorithms, or you can pay for low-accuracy sensors and write excellent algorithms, and get similar results. Leap Motion is an example of the former; the Apple IMU team is an example of the latter. Google's acquisition of Lumedyne shows which way the industry is headed.

Scoping the product experience is also very critical. Just because you can put 20 sensors on your device doesn't mean a better user experience. Multi-sensor algorithms are sophisticated and still need much development. Many orgs like AD, Freescale and Bosch have finally, after some 15 years in the wild, started to include quaternion firmware with their IMU chips - but too late, and they don't work on arrays (because of uncertainty propagation and parasitic calibration errors). A sketch of the kind of fusion such firmware performs follows below.
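As promised above, here is a minimal complementary filter fusing a gyroscope and an accelerometer into a single tilt estimate - a deliberately simple stand-in for the quaternion/Kalman-style fusion such firmware performs. The blend gain is an assumption for the sketch, not a tuned constant:

    // Minimal complementary filter for one tilt axis (illustration only).
    // Gyro integration is smooth but drifts; the accelerometer is noisy but
    // drift-free. Blending the two is the simplest form of sensor fusion.
    public class TiltFilter {

        private double angle = 0.0;               // estimated tilt, degrees
        private static final double ALPHA = 0.98; // blend gain (assumed, not tuned)

        public double update(double gyroRateDps, double accelAngleDeg, double dtSec) {
            double gyroAngle = angle + gyroRateDps * dtSec;            // integrate gyro
            angle = ALPHA * gyroAngle + (1.0 - ALPHA) * accelAngleDeg; // correct drift
            return angle;
        }

        public static void main(String[] args) {
            TiltFilter filter = new TiltFilter();
            double estimate = 0.0;
            for (int i = 0; i < 100; i++) {          // one second at 100 Hz
                double gyro = 10.0;                  // deg/s, simulated rotation
                double accel = (i + 1) * 0.1 + 0.5;  // noisy absolute angle, degrees
                estimate = filter.update(gyro, accel, 0.01);
            }
            System.out.printf("tilt after 1 s: %.2f deg%n", estimate);
        }
    }

A production stack would run a full quaternion filter across all axes, and calibrating several such units in an array is exactly where the uncertainty propagation and parasitic calibration errors mentioned above start to bite.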
Algorithms - The algorithms themselves are a big opportunity. The image processing required for depth-sensing cameras is still considered a heavy load and requires specific co-processors and optimized architectures. There are two core problems: 1. environment mapping/segmentation, and 2. gesture recognition.

The algorithms plus sensors are still not accurate enough. Since these sensors drive the display content, there is little scope for ever processing this data online (because of latency), the way we deal with speech. The old IP that has been granted is fairly useless, though legally still menacing. I expect a new wave in fusion, SLAM and sensor hardware quality to negate this barrier.

There need to be more psycho-optics studies on how these displays and algorithms interact with the human visual system and how they affect us physiologically over repeated and/or prolonged exposures. I am not certain whether everyone will perceive the same displays similarly, or whether individual variations in eyesight will ruin experiences for some. Example: how does the dominant eye affect near-eye display perception? There is opportunity here for ophthalmology to drive part of the discussion on algorithm design.

You will also find a lot more snake oil here than anywhere else. Academic research is very difficult to translate to production devices. The simulated CGI experiences that marketing shows us are not what people see in the wild (Kinect/LeapMotion). That's one big perception hurdle to cross.

Algorithm co-processors - The algorithms may be implemented in silicon + PROM (Microchip/MGC, Maxim/21100), or they may be licensed as firmware (Bosch). Such implementation opportunities may be converted into a segmented toolchain that can be licensed by independent or smaller vendors. In a way this would be similar to the fabless/foundry IP license and production models.

Metrology and calibration - This is the equivalent of the hard/soft-iron and lifetime calibration issues in IMUs. Error rates and reliability are very, very critical here, since they ruin usability and experience. There are studies that pretty much predict why technological leaps such as Glass fail in the real world (Dix et al., Human-Computer Interaction, 2005). Most of my hesitation in being enthusiastic about this segment comes from this UX perspective.

So a company that makes a product that implements a few features right 100% of the time may actually take the spotlight away from something like HoloLens, which implements lots of features that fail now and then. Graceful failures don't mean very much. Reliable failures do, but someone has to figure out how to make this work. [My personal opinion is that vision-based approaches for gestures will never achieve the reliability required for high-volume adoption in the time window it is required in. They are simply trying to solve the wrong problem. But I am biased.]

Ecosystem opportunities - A big opportunity from a devices perspective is inaccessible to most small players: an ecosystem to use the AR/VR product inside. Only Google and Microsoft have this. And so would Apple, if it decided to release something - which it still may.

So that was the background. Let's answer the questions based on it.

1. Companies likely to tax the AR/VR market

The usual light engine suspects: TI, if we go the DMD/DLP way, or the Korea/Japan/Taiwan LCD consortiums if we use LCDs or micro-LCDs. If LCoS makes an entry, then you may have Himax. Whoever controls GaN/GaAs wafer bonding and 3D stacking technology also has a play here.
Samsung, with all its subsidiaries and manufacturing partners, will definitely be taxing everyone. Sony's and Sharp's fortunes may reverse as well (though I highly doubt it will be significant enough for a full recovery). Just about every pico-projector or small form factor display specialist company may have a play here.

A note on LCoS: it has some unique wavelength selectivity and phase modulation characteristics. It will find unique volume applications for sure, but it has to mature (for example, ferroelectric LCoS) in time to be viable for AR/VR.

Then there are the sensor/A2D/ADC semiconductor manufacturers at play - all the MEMS and optical sensor companies. Among them, the one with the best accuracy and precision may win. Google has a march on other companies with the Lumedyne acquisition, but I am not sure how mature their lines are.

Semiconductor photonics companies like Philips, Maruwa, Osram or Sumitomo are going to benefit if they just try to play nice with the AR/VR companies instead of focusing on their energy-efficiency marketing.

Metrology software companies that provide enterprise and production support, with standardized services for sensor algorithms, metrology and/or device calibration, should be successful. Bosch is an example, and there are several companies in the Illinois area as well. Prototyping silicon has reliability issues that may be solved with standardized outsourced testing services.

There are opportunities for SFF computing and GPU companies to release plug-and-play modular hardware that ensures consumer-grade laptops and desktops are compatible with VR/AR systems. USB 3.x standards appear to be a great way to enable such technology. It is likely that consortium-based standards will be created to enable such channels, and such standards will be used to levy a tax. (See the note below on why entrenched desktop GPU players may not have much of a lead, while mobility/low-power IP core licensing has strong opportunities.)

New actuator technology, like bone conduction, needs to be examined a little closer. There are things that can be done here with multi-actuator setups that don't seem to have been explored yet. Companies that already specialize in bone-conduction speakers (Dayton, for example) should see a steady flow of custom design RFQs. Somebody is eventually going to go towards a Power Glove-type solution (again!) and incorporate gyroscopic actuators (spinning inertial platters) for haptic feedback. Those that can make them small will have considerable play.

In addition to passive optics, some larger, more mature technology companies are considering active optics. The usual approach is to use polymer thin-film composites of nematic LCs and other organic/inorganic birefringent materials to construct patterned active layers, sandwiched between glass and TCO layers, to form waveguides. An example may be DO.

Biometry companies and statistical human body model companies should also see better margins. The fashion market will dictate a few trends here, but I have no idea where to even begin with that.

Biocompatible polymer companies should also see a surge in inquiries, depending on the weight and fit of the final design. Fitbit had issues. So I expect 3M or Eastman to pick up a piece of the pie, along with their traditional plastics lines.
There could be a move to natural fibers as well, but I have no idea who the players there are.

The demand for software developers skilled in sensor implementation will continue its upswing, and I suspect to the point where many of these jobs go offshore. So offshore consultants need to figure out how to incorporate such skills into their pools. Initially, it won't be as straightforward as learning a framework: you will need graphics, mobility GPU, physics and EE/DSP skills.

Big players like Intel will keep trying to force their ideas on people, and they may succeed. The entry of companies like Apple into the fray will definitely tip the carts for every other company not part of their supply chain. You will see smaller companies without strong tech stacks disappear or get acquired in fire sales.

The receptivity of the Chinese market is another huge factor. If a small player wins there, then the companies in North America may not have a chance to adapt to the China-internal supply chain efficiencies. But that's a generic statement, true for many segments. What will be interesting is whether AR/VR gets forced into a novelty segment the way pico projectors were. And what I do know is that they, China, are looking at a very different, bigger picture and running in a completely different direction from the US or the Japanese/Koreans. It may pay not to be caught inside the Northern California feedback loop.

A general point - the chicken-or-egg problem: big manufacturing doesn't do small prototyping, while small manufacturing requires cash upfront and doesn't extend deep credit to small prototyping customers. Small companies that have big ideas but little to no credit generally find it difficult to execute on the actual vision, because some specs and tolerances are just proportional to money. Manufacturing partners that look at the big picture and take on the risks may find themselves getting lucky. Hon Hai is doing just this; the ones based in the US or Korea seem to be providing only lip service.

Physical FMCE retail stores like Best Buy and the Apple/Microsoft Stores that showcase AR/VR and personalized wearables experiences are going to be critical as well. There is simply no other or better way to market such highly personal experiences. So capturing or partnering with distribution channels is critical if you don't have your own. This is also good for physical retail, because of foot traffic plus ecosystem sales. I expect Best Buy will do really well.

2. Supply chain dependencies

  • GaN/GaAs wafer quality, availability and scaling to meet consumer demand.
  • Efficient illumination sources (LEDs, laser diodes). For non-emissive sources, the light has to be generated by something. Most people are unaware of how critical illumination sources, and the semiconductor companies that make them, are. There are maybe two, at most three, orgs that can actually provide sources at scale.
  • Precision diamond turning equipment for lens and mold manufacture.
  • Micro-forming/micro deep drawing/micro injection molding and stamping tech IP. Apple is the only company with an internal specialist group on this. Most people don't understand this end of the business, but it's critical, because miniaturization/compactification requires heavily non-ISO parts for fasteners (for example, low-strength steel Y-wing screws with 0.2mm threads on the Apple Watch), latches, rocker/slider/MOM switch mechanisms and custom connectors (e.g. the hidden 6-pin port on the Apple Watch).
  • Foundries with stable MEMS, FinFET, <20nm design rules and processing. Anyone can flat out say GlobalFoundries, TSMC and Samsung Semi are going to make money; their recent investments (>$50B aggregate, 2012-now) reveal as much.
  • Sensor design IP accessibility - The IP needs to be not just good but defensible against frivolous claims. Verified, reliability-approved designs are immensely valuable.
  • Sensor fusion/SLAM algorithm IP accessibility - Many AR/VR h/w orgs start building MVPs using barely functional, first-principles algorithms. The bare functions from open-source libs are okay but don't scale well. It's easier to pay for specialist IP than to develop self-calibration routines or wait for years to get SLAM working correctly. Example: Bradski was one of Abovitz's first hires. There are about 100 people in the world with the know-how to do this well. Chances are, they are not on your team.
  • 3D packaging IP and industrial equipment for system-in-package design. Very few orgs specialize in 3D packaging design/simulation.
  • EMC/EMI and EDA simulation tooling. Critical because of body proximity and tight packaging, with components physically on top of each other.
  • Wrap-around ultra-high-density flex materials and manufacturers. For flexes to be 'wrappable' or bendable, the traces must not fail when bent yet be thin enough to bend easily. These are antagonistic design criteria, difficult to manufacture reliably at scale.
  • Small form factor interconnect manufacturers. Another thankless, behind-the-scenes role, but this is the reason devices become thinner or smaller, not Moore's law. Smaller semiconductor chips do help, but reliability in IC packaging, molds and interconnect technology is what allows the signals to transfer. Very few specialist companies can handle high-volume custom interconnects.
  • Bandwidth/fiber/telecom companies, plus multi-hop/mesh network h/w and f/w developer companies. IoT needs a display, and AR/VR setups, along with smartphones, may be it. There is a huge effort behind the scenes for control of this space, which is seen as high growth. Entrenched players are leading this effort; not all are traditional networking companies.
  • Human factors analysis - Human bodies have statistical dimension distributions; Dreyfuss's text The Measure of Man and Woman doesn't give you digitally usable data. Companies specialize in collecting statistical body data and rendering it into useful forms (including CAD, finite elements and point clouds). Then there are gait, form, structure and posture datasets that designers love to have. This industry is bound to benefit from all kinds of wearables.
  • Fab cycles for fabless, smaller players.
  • Polymer orgs and their consultants. Polymers and specific recipes are used in everything from mechanicals to optics to packaging to flexes to fibers. DuPont's Kapton XC used on Apple's stealth flexes is an example.
  • Industrial optic metrology/lightfield homogeneity/illumination PSF measurement tool makers. Cohu/Delta Design for line equipment, Nikon, Hexagon, and a bunch of European companies.
  • Regulatory policies on eye fatigue/illumination exposure. There will be guidelines issued and there will be some posturing. See the note on health risks below.
  • Physical retail stores for distribution channels.

Note on regulations and health risks (cc. Colin Jensen)

Head-mounted devices will fall under both FCC and FDA mandates (21 CFR 1000.15). Device makers will try to force FCC-only oversight by disavowing any medical applications, but FD&C sec. 531 makes it clear that such devices should require FDA oversight.
However, in the end this may be a matter of legal clarification that really cannot be predicted. FCC regulations are murky, and I am unsure how the agency arrives at its conclusions. FDA operations are clearer, or maybe I am just more familiar with them, so I focus on those.

Broadly - the first-gen products will carry standard disclaimers and warnings removing or reducing liability (<30 minutes of usage, >16 years of age, etc.). The FDA, as a matter of practice, does not 'go after' nascent businesses unless something egregious happens or it receives complaints or reports of concern from the practicing medical community. It will first study the physiological effects under expert oversight, then issue guidances, and finally create regulations and standards that equipment manufacturers have to follow. This takes time, and industry comments are always sought and taken into consideration.

Unless a device is deemed absolutely dangerous, commerce may be allowed until guidances are issued. If the expert panel finds a significant chance of injury, it will select a classification for such near-eye devices (Class I being safest, III being worst). The near-eye industry will fight to get its products classified Class I at worst.

LCD/DLP-based devices may be declared Class I, depending on the raw lumens they output, among other factors. Many AR devices may actually not require classification at all, especially if they are battery operated, because limited power restricts the illumination output. However, this is not true for tethered devices such as those from Oculus. They may try to increase their display brightness, which may increase health risks (see below).

Google/Magic Leap, if they use laser diodes as sources, may get a Class II (with a mandatory 510(k) filing, since there are no equivalent products that shine lasers into your eyes for long periods of time). The increased regulatory costs may be passed down to customers. Regulation may also force such tethered/high-lumen near-eye devices to be sold under medical or occupational guidance.

Health risks - The broad categories are (A) simulator sickness and (B) repeated/prolonged exposure.

Simulator sickness is an ongoing discussion. Some are of the opinion that initial sickness can be overcome by sitting and acclimatizing over time, and assume this initial barrier won't restrict adoption. I disagree, but won't dwell on this.

Prolonged exposure - The physics of head-mounted systems - brightness, near-eye lensing, moderate temperature, pressure and electromagnetic exposure dosage, and the straps and weight of the system - should definitely require physiological/ophthalmological oversight. For example, some wrap-around strap-based designs do increase mechanical pressure or physical stress on the superficial temporal, occipital and angular/facial veins and arteries. The physics (inertial motion, eccentric mass distribution) of head-mounted gear will lead to conditions similar to or worse than what commonly causes 'helmet whiplash', or gorilla arms for controllers. RSI is also to be expected with unsupported necks and arms.

Note that these are the obvious issues; there will be others, such as those pertaining to the blink reflex, ocular hypertension, trauma to general orbital systems and many more. I don't have the expertise to assess these.

The risks will have to be evaluated against the benefits. Many concerns may be addressable through appropriate engineering and quality control.
I only hope that systems designers follow a 'do no harm' or ALARP principle and consider such issues in their system design.

Is medical regulation a barrier to entry? It depends: not for most players, but certainly for the homebrewers. The onus is on the very first devices to show that they are not dangerous. After that, every other manufacturer only needs to claim equivalence with the first device. This may still deter some players not familiar with the policies and hinder investment in such players. NB - A Class III classification is highly unlikely, but could prove very dangerous to the AR/VR industry. However, I expect there will be no regulatory roadblocks until at least 2018 or later.

Note on content channels and app-store lock-in

All major players (Microsoft/HoloLens/Windows Store, Google/Magic Leap/Play Store...) will have their own content channels. It is hard to expect Steam or any non-affiliated channels - those not affiliated with either a major h/w or OS maker - to succeed. Example: Leap Motion; Amazon's app store didn't do too well either. Another example: developer funds, like those created by Highland for Leap Motion, don't work out to create a natural ecosystem when competing against integrated app stores.

A generic, system-independent app store faces huge issues, as was initially demonstrated by the fragmentation in Google's Play Store, where many apps didn't work the same, or at all, on different versions of Android phones. So platforms like OpenVR face big obstacles. First-gen h/w products with heavy subsystem and performance fragmentation at the h/w and OS level cannot simply be virtualized away in software at the outset, imo.

This point becomes especially pronounced if your display tech just uses different physics to get the information into the user's eyes (Magic Leap - lightfield; Oculus et al. - stereoscopic displays; HoloLens - TBD) or uses different types of sensors and recognition algorithms with different levels of resolution/accuracy to capture input. If your immersion/interaction stacks are different, you would introduce huge inefficiencies and bloat in trying to virtualize all possible hardware combinations into a single API. Some UI choices may stop making sense on different stacks/controller choices.

On why desktop GPU companies don't have a guaranteed tax

  • They are not common to all AR/VR graphics pipelines.
  • Gaming-based VR rigs may need them, but baseline h/w margins are low and the larger market is not receptive to premium performance h/w.
  • Market drivers may be low power, small footprint, mobility and cost, once baseline performance is guaranteed.
  • Many h/w companies would prefer to eventually develop their own low-power SoCs, specialized to their own tech stack, rather than pay a premium for generic systems (see the point about integrated verticals; example: Apple and PowerVR/Imagination Tech IP, Cadence/Tensilica, Synopsys...). Traditional GPUs are good for generic development with hardware abstracted away, but performance optimization starts at architectural design. [I am a biased, rotten h/w guy, you can tell.]
  • You can mix and match IPs and foundries. Sub-20nm FinFETs are now stable and a big deal for next-gen mobility SoCs. So depending on an org's commitment and strategy, it's not that difficult to develop application-optimized graphics engines at the system design level while still getting next-gen fab. Reducing generational dependencies on external architectures is generally a good idea and gives you better control in targeting experiences.
I think the position that 'desktop-class GPUs are mandatory' comes from assuming that VR/AR equals performance gaming and Oculus - not casual gaming, not enterprise, not productivity. I doubt this.

From a display h/w perspective, pixels need sufficient physical transition time to be able to switch really fast. Only micro-LEDs have the kind of refresh rates mentioned in Binstock's article in engineering-stable designs. Liquid crystals don't switch that fast, because of hysteresis, and it's a mechanical torsion we are talking about. Ferroelectric LCs do, but are not out of the labs yet. DLPs can, but they have ringing and pull-in stiction issues that still need fixing; otherwise you get jitter and color breakup (the smaller the pixels, the worse that problem). That's one of the reasons why solid-state photonics is such a huge deal. LC-based displays may work for AR with selective region-based decompression/refresh, but AR doesn't commonly use LCDs. Please note the disclosure below.

So current-gen displays don't support insane, >90Hz-peak refreshes, which implies there is no need for 120 FPS GPU support. (At 90 Hz the per-frame budget is about 11 ms; at 120 Hz it drops to roughly 8 ms, which today's pixels cannot physically switch within.) Magic Leap's LBS tech may also surpass 90 Hz refresh rates easily, and they would definitely need a custom SoC to fit their mobility design.

A note on the '150Hz' frame rate that eyes can supposedly detect - that's a number that apparently a WoW gaming forum spewed out, and I believe it is inaccurate. Eyes don't scan in consistent frames (see Andrew Watson's work at NASA on the physiology of display perception, or Holmqvist et al., Eye Tracking: A Comprehensive Guide to Methods and Measures, 2015). Physiologically, if we had to converge on a number, it would be much higher, and we risk forcing multiple concepts of biological photon sensitivity, contrast response and vision/motion perception onto a not-very-useful generic concept of hardware frame rates. The actual physiological processes of vision perception are very relevant to near-eye display people, mind you. But that's another discussion for another place.

I realize I focused on the most unglamorous aspects of product design and lifecycles here, but hey, this is what it takes. Some poor sod has to sit down and figure these things out so we can dream our rainbow dreams in PowerPoints, clean APIs and rendered CAD images.

I deliberately avoid naming specific companies, except GlobalFoundries/TSMC/Samsung Semi, because lock-in depends on who is willing to take the biggest bet on this market. Many orgs that have a play here haven't shown deep commitment yet, except maybe Microsoft and Google.

Disclosure: I design, build and analyze micro-electromechanical systems and optoelectronic hardware for display/imaging technology used in ultra-miniature (pico) light engines, and sensor technology for human-computer interfaces, at Ostendo, a display technology company in SoCal that also makes near-eye products. I invented a sensor+algorithm stack called deepSense with applications in AR/VR/biomed/robotics and real life. It's just the greatest thing since Nyan Cat. I am fully of that opinion.
