Three-Point Likert Scales Are Good Enough: Fill & Download for Free


How to Edit Your Three-Point Likert Scales Are Good Enough Online Easily and Quickly

Follow the step-by-step guide to get your Three-Point Likert Scales Are Good Enough edited quickly and accurately:

  • Hit the Get Form button on this page.
  • You will be taken to our PDF editor.
  • Edit your document using the tools in the top toolbar, such as adding checkmarks or erasing content.
  • Hit the Download button to save the finished document to your local computer.

We Are Proud of Letting You Edit Three-Point Likert Scales Are Good Enough Like Magic

Discover the Benefits of Our Best PDF Editor for Three-Point Likert Scales Are Good Enough


How to Edit Your Three-Point Likert Scales Are Good Enough Online

If you need to sign a document, you may also need to add text, insert the date, and make other edits. CocoDoc makes it very easy to edit your form with just a few clicks. Let's see how to finish your work quickly.

  • Hit the Get Form button on this page.
  • You will be taken to our online PDF editor web app.
  • When the editor appears, click a tool icon in the top toolbar to edit your form, such as adding a text box or a cross mark.
  • To add a date, click the Date icon, then drag the generated date to the target position.
  • Change the default date to another date in the box if needed.
  • Click OK to save your edits, then click the Download button to export a copy.

How to Edit Text for Your Three-Point Likert Scales Are Good Enough with Adobe DC on Windows

Adobe DC on Windows is a useful tool for editing your file on a PC, especially if you prefer to work on files offline. Let's get started.

  • Click the Adobe DC app on Windows.
  • Find and click the Edit PDF tool.
  • Click the Select a File button and select a file from your computer.
  • Click a text box to adjust the font, size, and other text formats.
  • Select File > Save or File > Save As to confirm the edit to your Three-Point Likert Scales Are Good Enough.

How to Edit Your Three-Point Likert Scales Are Good Enough with Adobe DC on Mac

  • Select a file on your computer and open it with Adobe DC for Mac.
  • Navigate to and click Edit PDF in the panel on the right.
  • Edit your form as needed by selecting the tool from the top toolbar.
  • Click the Fill & Sign tool and select the Sign icon in the top toolbar to customize your signature in different ways.
  • Select File > Save to save the changed file.

How to Edit your Three-Point Likert Scales Are Good Enough from G Suite with CocoDoc

Do you use G Suite for your work? You can make changes to your form in Google Drive with CocoDoc, so you can fill out your PDF without leaving the platform.

  • Go to the Google Workspace Marketplace, then search for and install the CocoDoc for Google Drive add-on.
  • Go to Google Drive, find the form, right-click it, and select Open With.
  • Select the CocoDoc PDF option and allow your Google account to connect to CocoDoc in the popup window.
  • Choose the PDF Editor option to open the CocoDoc PDF editor.
  • Use the tools in the top toolbar to edit the fields of your Three-Point Likert Scales Are Good Enough, such as signing or adding text.
  • Click the Download button to save your form.

PDF Editor FAQ

What is a controversial research paper in linguistics?

There are several controversial papers written by Dan Everett and his son Caleb Everett. The one that resulted in the most heated debate was Everett (2005), where Dan Everett essentially argued that a language he documented, Pirahã, disproved Universal Grammar because Pirahã was constrained by culture. Everett claimed that Pirahã culture prohibited discussing anything that was not immediately present, which caused the language to lack recursion, tense, and other features.

Everett made all sorts of highly unusual claims and choices in the paper. For example, he claimed, based on the following data, that Pirahã has no quantifiers:

Eventually Everett began to gloss those quantificational items as related to size, after arguing they weren't quantifiers. His glosses became, frankly, laughable:

On this topic, Anna Wierzbicka (at the time a lecturer at the Australian National University) objected:

"in using such glosses, Everett exoticizes the language rather than identifying its genuinely distinctive features. To say that ti ’ogi means, literally, 'my bigness' (rather than 'we') is like saying that in English to understand means, literally, 'to stand under.' To deny that hi’ogi means 'all' is to make a similar mistake."

Several dozen linguists and other scientists left comments on the paper pointing out the inconsistencies. Stephen Levinson of the Max-Planck-Institut für Psycholinguistik, for instance, had this to say:

"Blatant inconsistencies likewise do nothing to reassure the reader. For example, we are told that the Pirahã are monolingual, but we find that “often this or that Pirahã informant would tell me (in Portuguese) that. . .” and that “Pirahã have long intermarried with outsiders,” suggesting sustained bilingualism. Elsewhere it is stated that there are bilingual informants, although their Portuguese is poor. Having made the Pirahã sound like the mindless bearers of an almost subhumanly simple culture, Everett ends with a paean to “this beautiful language and culture” with “so much to teach us.” As one of the few spokespersons for a small, unempowered group, he surely has some obligation to have presented a more balanced picture throughout."

Nevins, Pesetsky & Rodrigues (2009) wrote a paper arguing against Everett's, where they showed evidence of Pirahã myths and fiction drawn from other papers.

For some reason, Dan Everett has become very popular in the Atheist and Atheism Plus crowd, as well as among Conlangers, and lots of non-scientists are his fans. They take Everett's claims seriously and ask questions on Quora like:

Why don't the Pirahãs believe in God but they believe in spirits?

Daniel Everett cites Pirahã grammar as one example that contradicts Chomsky's universal grammar (UG) and thus argues that UG is not entirely accurate in principle. Nevertheless, Chomskyans refuse to accept Everett's view. Why?

OK, aside from this one controversial paper, most of the controversial papers published in the last 20 years have been in Phonology and Historical Linguistics. I'll get to Caleb Everett's paper in Phonology.

Syntax

In the 90s, many papers initially arguing for Minimalist Syntax were, at the time, quite controversial in linguistics, and they are still very controversial in psychology, in evolutionary biology, and in fields of linguistics that intersect with those fields.

Chomsky (1993) essentially threw the leading theory, Principles and Parameters, which he himself had partly started, out the window.
Rather than explaining language structures with analogs to the brain and biology through constraint ranking, Chomsky (1993) argued that we should examine language as a perfectly optimal system and move on from there.

The eventual result was that language variation, word order, and basically almost everything to do with language was put on the back burner, no longer the object of study for Syntax. Word order? Since it varies, it cannot be reflective of the LAD (Language Acquisition Device); there must be some area of phonology that handles word order. It essentially disembodied big-L Language even further from the brain, and from observable data.

Soon afterwards, psychologists began publishing papers attacking this line of inquiry, characterising it (accurately, in my opinion) as no longer actually trying to study language acquisition and how Language interfaces with the brain, and instead as just an abstract theory about an already abstract theory. However, many syntacticians sort of went along with it, and we have now had a few decades of Minimalist-Program-dominated syntax.

In the early 2000s, though, Jackendoff tried to recenter Minimalism as a theory about how language is acquired, first patiently in Jackendoff (2003), and later much more as an attack in Pinker & Jackendoff (2005) and Jackendoff (2010). In the meantime, Minimalism became even more minimalist: by 2005, Chomsky had explicitly stated that all he was concerned with was a theory of Recursion.

Jackendoff & Pinker replied, quite explicitly, in their abstract:

“…in light of recent suggestions by Hauser, Chomsky, and Fitch that the only such aspect is syntactic recursion, the rest of language being either specific to humans but not to language (e.g. words and concepts) or not specific to humans (e.g. speech perception). We find the hypothesis problematic. It ignores the many aspects of grammar that are not recursive, such as phonology, morphology, case, agreement, and many properties of words. It is inconsistent with the anatomy and neural control of the human vocal tract. And it is weakened by experiments suggesting that speech perception cannot be reduced to primate audition, that word learning cannot be reduced to fact learning, and that at least one gene involved in speech and language was evolutionarily selected in the human lineage but is not specific to recursion. The recursion-only claim, we suggest, is motivated by Chomsky's recent approach to syntax, the Minimalist Program, which de-emphasizes the same aspects of language. The approach, however, is sufficiently problematic that it cannot be used to support claims about evolution. We contest related arguments that language is not an adaptation, namely that it is “perfect,” non-redundant, unusable in any partial form, and badly designed for communication. The hypothesis that language is a complex adaptation for communication which evolved piecemeal avoids all these problems.”

Since then no punches have been held back, but syntacticians have been largely ignoring non-linguists, so in my opinion the punches are not actually landing anywhere.

Historical Linguistics

There have really been two different vectors producing controversial papers in Historical Linguistics.
One is people casting doubt on widely-accepted families; the other is computers becoming more advanced, with linguists and computer scientists throwing a bunch of data into a computer and publishing whatever it spits out.

Many of the papers casting doubt on families are almost like Flat Earther stuff: thoroughly unscientific, often written by historians in countries that want to be the centre of the world in a language family (e.g., Greece, Hungary, Turkey), or by people trying to argue that the field of Historical Linguistics itself is rooted in nothing but racism, and that there are no real connections between the languages of northern India that descended from Sanskrit, Persian, and many of the languages of Europe.

On the other side of things, Marija Gimbutas, who came up with the widely-supported Kurgan hypothesis, started spinning it into a theory where the warlike Proto-Indo-Europeans came out of the steppes and rampaged through everything. Eisler (1988) then spun that into a very wild hypothesis that the original Indo-Europeans created sexism, racism, and war, and that they preyed on the egalitarian pre-Indo-European inhabitants of Europe, the Middle East, and India. Eisler (1988) hypothesised that before then, societies existed in a state of ‘Gylany’ where everyone was equal, and essentially blamed the Kurgans (and Proto-Indo-European speakers) for bringing about an age of male domination, social hierarchy, and conquest.

Some papers, such as Bouckaert et al. (2012) and Bouckaert, Bowern, & Atkinson (2018), try to model language families by feeding a bunch of data to a computer and interpreting the data through the lens of Bayesian phylogeography. I find the model itself a problem for explaining languages, and part of this is because of its origins: Bayesian phylogeography was originally designed to map the spread of disease. Diseases spread over a matter of days or weeks, whereas languages spread very slowly, over years or even thousands of years.

What these assumptions look like in the model, when applied to language, is essentially as follows:

  • If a language forks into two languages, one remains in the same location and the other diffuses out of it.
  • The language family being analysed is in fact one unit (a family).
  • Languages that are geographically closer to each other are more closely related to each other.
  • Terra nullius: no language in the family ever replaces a language in that same family. The languages only expand into land occupied by people who speak languages of different families.

With viruses this makes sense… If a group of people came down with a bout of Ebola, under normal circumstances Ebola would not spread to the same people who caught it the first time around; they would have built up immunity. In language, however, this seems to manifest itself as terra nullius.

Phonology… last but not least

For some reason, many linguists and anthropologists have tried to argue that all sorts of things in nature cause things in phonology. Typically, these papers take some data showing a correlation and go through serious mental gymnastics arguing causation.

None of the claims I have seen so far have any theory-internal reason for us to believe them: there is nothing in the theories of phonology, the acquisition of phonemes, human anatomy and physiology, or any other field to make us think that being in thin air causes ejectives to develop, that living in an extra-moist environment causes tones to develop, or that making nasal vowels is caused by cold climate or by being sexually promiscuous.
Yet, these authors all have different ideas. The most famous paper is Everett (2013), which argued that one factor leading languages to evolve ejective stops is high altitude:

"Since ejectives are made via the compression of air supra-glottally, we initially speculated that their articulation might be facilitated at higher elevations, since atmospheric air pressure (and therefore the air pressure inside one’s mouth and lungs) is reduced at such elevations. The grounding for our hypothesis follows naturally from some basic principles of air pressurization.

The pressure differential created by a velar ejective sound, the most common kind of ejective, can be schematically described by subtracting the air pressure in the pharyngeal cavity prior to constriction (P1) from the pressure after constriction (P2), i.e. P2-P1. P2 can be found via Boyle’s law: P2 = (V1×P1)/V2. In other words, the air pressure differential (P2-P1) is created by compressing the pharyngeal air cavity (V1) at a given atmospheric pressure (P1), by reducing the cavity to a smaller volume (V2)."

What is most striking to me is that this explanation doesn't actually say why ejectives would develop, only that producing an ejective is easier at altitude due to Boyle's law, or perhaps in a plane at 30,000 ft or while scaling Mt Everest [p’ojls’ lɑː]~[p’ojls’ lɔː].

Oddly, many Conlangers read this paper and then started posting material on DeviantArt, YouTube, and Conlanger boards referencing it.
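The Boyle's-law arithmetic in the passage quoted above can be sketched in a few lines. The pressure and compression values here are illustrative assumptions, not figures from Everett (2013); the point is just that, for a fixed compression of the pharyngeal cavity, the resulting differential P2−P1 scales with ambient pressure P1:

```python
# Sketch of the quoted Boyle's-law arithmetic (illustrative numbers only).
# Assume the pharyngeal cavity is compressed to 90% of its volume
# (V2 = 0.9 * V1), so P2 = (V1 * P1) / V2 = P1 / 0.9.

def ejective_pressure_differential(p1_kpa, compression=0.9):
    """Return P2 - P1 for a cavity compressed to `compression` * V1."""
    p2 = p1_kpa / compression  # Boyle's law: P2 = (V1 * P1) / V2
    return p2 - p1_kpa

sea_level = ejective_pressure_differential(101.3)  # ~101.3 kPa at sea level
altitude = ejective_pressure_differential(75.0)    # ~75 kPa near 2,500 m

# The same gesture produces a differential proportional to ambient
# pressure, while the pressure *ratio* P2/P1 is identical at any altitude.
print(round(sea_level, 2), round(altitude, 2))  # → 11.26 8.33
```

Note that the relative overpressure P2/P1 is the same everywhere; only the absolute differential changes, which is part of why the argument has been questioned.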
I saw one member of the My Little Pony fandom create a Pegasus language and give the Pegasi (the MLP plural of Pegasus) ejective stops because they fly high in the air.

Another Caleb Everett paper that has gotten some attention in the Conlanger community is Everett, Blasi, & Roberts (2016), which claimed that humidity causes tone, or rather that dryness makes it harder to hear tone. This paper was much better argued, I find the claim a bit more palatable, and the way tone is lost in dry climates is actually explained logically:

“Given the heightened articulatory effort and imprecision associated with phonation in desiccated contexts, we suggested that the clear avoidance of complex tonality in arid contexts is unlikely a matter of coincidence. Since fundamental frequency plays such a prominent role semantically in languages with complex tone, our conjectured causal relationship was, we think, both plausible and investigable via further experimental inquiry. After all, ease of articulation is well known to influence the typological distribution of certain sound patterns. Voiced velar plosives are less frequent than their alveolar counterparts at least in part, because it is more difficult to maintain the reduced supralaryngeal air pressure requisite for voicing when air is stopped at the velum rather than at the alveolar ridge.”

Alright, so tone is lost in arid environments because it is harder to maintain the contrast, because it is actually harder to hear. If Everett et al. can prove that, great! A sample of languages with tone and languages without tone slapped on a map, though, is not enough to convince me.

That was the least controversial of these papers. The next two are beyond strange.

Fought, Munroe, Fought, & Good (2004) decided that since vowels like /a/ are more sonorous than vowels like /i/ or than consonant clusters, languages in warm climates must have more /a/, more open syllables, fewer nasal vowels, and an overall higher sonority score.
Why, you ask? Well, here's the explanation in their own words:

As distance between speaker and hearer is increased, high sonority levels, more audible at a distance, thus assume heightened importance in carrying intelligible messages. Speech sounds differ in their sonority and hence in their carrying power in ways that are well understood. Societies differ in their daily and seasonal cycles of social activities, and in the degree to which the activities are accompanied or regulated by speech, which is sometimes carried on at close range and otherwise at greater distances and in varying physical environments. We believe that this diversity of resources and variation in patterns of use, viewed as a process operating at all times in societies and in the phonetic systems of their languages, may help to account for differential mean sonority levels in the languages of the world. To repeat, we posit that distal oral communication occurs more often in warm/hot than in cold climates.

Apparently, people in warmer climates yell to each other more often; therefore, they use more open syllables and fewer consonant clusters. The authors never adequately explained why, and there is research showing that many Northern European cultures in colder climates keep wider personal space than Mediterranean cultures in warmer parts of Europe, which would predict the exact opposite.

This paper did not show a scatterplot of their data based on cold months, but another paper advocating this same type of research did. Three years later, Ember & Ember (2007) took Fought et al. (2004)'s claim and modified it. They also asked Fought et al. for their data; Robert Munroe gave it to them, and Ember & Ember turned it into a scatterplot. It was the worst scatterplot I have ever seen published in my life:

The fact of the matter is, Fought et al. (2004) quite simply oversampled the languages with zero cold months, resulting in that group alone showing a distribution ranging from high to low sonority.
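The oversampling artefact described above can be demonstrated with a toy dataset (invented numbers, not Fought et al.'s actual data): a heavily oversampled cluster of zero-cold-month "languages" with elevated sonority scores drives a sizeable overall Pearson correlation, which collapses once that cluster is filtered out:

```python
# Toy demonstration of the oversampling artefact (made-up numbers).

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# 20 oversampled languages with 0 cold months and high sonority scores ...
data = [(0, 0.50 + 0.01 * i) for i in range(20)]
# ... plus only 10 languages spread over 1-10 cold months, with a flat trend.
data += [(m, 0.51 if m % 2 else 0.50) for m in range(1, 11)]

r_full = pearson_r([x for x, _ in data], [y for _, y in data])
filtered = [(x, y) for x, y in data if x > 0]
r_filtered = pearson_r([x for x, _ in filtered], [y for _, y in filtered])

# The zero-cold-month cluster manufactures the trend on its own:
print(round(r_full, 2), round(r_filtered, 2))  # → -0.57 -0.17
```

With the cluster included, |r| is moderate; without it, the correlation is negligible, which is the shape of the artefact the JASP re-analysis described below is claimed to have found.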
Per the Central Limit Theorem, as the sample size increases the data gets closer to a normal distribution, so we would expect that if they had sampled more of the languages with more cold months, we would see a spread similar to that of the zero-cold-month languages.

I decided to plug their data into JASP, and I discovered that their statistically significant p-value is an artefact of oversampling zero-cold-month languages. As is, there was a statistically significant correlation. When I filtered out all of the zero-cold-month languages (and corrected a few of their datapoints), there was no statistically significant correlation:

Now, back to Ember & Ember (2007). It was definitely the worst paper in a peer-reviewed journal I have ever read. I once wrote a viral answer about it here: Brian Collins's answer to What are some of the worst academic papers ever published?

Ember & Ember took Fought et al. (2004) and basically claimed that the reason their cold-month-to-sonority causal relationship is not perfect is that nasal vowels are caused by sexual promiscuity. Are there any theory-internal reasons for us to think so? No. But Ember & Ember had other ideas:

Lomax (1968) found that premarital sexual restrictiveness predicted two aspects of song style—less vocal width and greater nasality. Lomax describes vocal width as follows: It ranges from singing “with a very pinched, narrow, squeezed voice to the very wide and open-throated singing tone of Swiss yodellers” (Lomax 1968:71). A narrow voice is produced by “raising the glottis, raising the tongue and pulling it back, and tensing the muscles in the throat” (Lomax 1968:71). Nasality occurs when the sound is forced through the nose (1968:72–73). The highest ratings on nasality occurs when nasality is heard throughout a song, regardless of the actual songs sung. In Lomax’s view, the singing voice reflects tension about sexuality (Lomax 1968:194).
Could opening the mouth wide to make sonorous sounds be partially explainable as an effect of sexual permissiveness? We explore the effects of eco niche and sexual permissiveness in the section on results below.

Everything about this is bizarre. They get the definition of a nasal vowel not from a phonologist, as you would expect, but from Lomax, a folklorist. Some people have suggested it may have been a prank paper, but I do not think so: in one of the authors' obituaries, it was cited as an example of his interdisciplinary contributions to anthropology.

Ember & Ember (2007) operationalised promiscuity as having extramarital affairs, and then arranged languages in box-and-whisker plots by frequency of extramarital sex…

Clearly, they should have taken a statistics class before trying to do research involving statistics. A terrible scatterplot showing a random spread based on number of months? A scatterplot where the X axis is not a continuous variable, and box-and-whisker plots for frequency of extramarital sex? Arranging frequency of extramarital sex like a Likert scale?

On top of all that, they ran a multiple regression when the X variables were ordinal, instead of a non-parametric test. For SPSS to even let them run a multiple regression with ordinal variables, they must not have defined their variables properly.

Odds are, if you are citing any of these except Bouckaert et al. (2012; 2018), Chomsky (1993), Jackendoff (2003), Jackendoff & Pinker (2010), or Gimbutas (1982), you are writing something really crazy…

References:

Bouckaert, R., Lemey, P., Dunn, M., Greenhill, S. J., Alekseyenko, A. V., Drummond, A. J., ... & Atkinson, Q. D. (2012). Mapping the origins and expansion of the Indo-European language family. Science, 337(6097), 957-960.

Bouckaert, R. R., Bowern, C., & Atkinson, Q. D. (2018). The origin and expansion of Pama–Nyungan languages across Australia. Nature Ecology & Evolution, 1.

Chomsky, N. (1993). A minimalist program for linguistic theory. In K. Hale & S. J. Keyser (Eds.), The view from Building 20: Essays in linguistics in honor of Sylvain Bromberger (pp. 1-52). Cambridge, MA: MIT Press.

Eisler, R. T. (1988). The chalice and the blade: Our history, our future. San Francisco: Harper & Row.

Ember, C. R., & Ember, M. (2007). Climate, econiche, and sexuality: Influences on sonority in language. American Anthropologist, 109(1), 180-185.

Everett, C. (2013). Evidence for direct geographic influences on linguistic sounds: The case of ejectives. PLoS ONE, 8(6), e65275.

Everett, C., Blasi, D. E., & Roberts, S. G. (2016). Language evolution and climate: The case of desiccation and tone. Journal of Language Evolution, 1(1), 33-46.

Everett, D., & Everett, K. (1984). On the relevance of syllable onsets to stress placement. Linguistic Inquiry, 705-711.

Everett, D. (2005). Cultural constraints on grammar and cognition in Pirahã. Current Anthropology, 46(4), 621-646.

Fought, J. G., Munroe, R. L., Fought, C. R., & Good, E. M. (2004). Sonority and climate in a world sample of languages: Findings and prospects. Cross-Cultural Research, 38(1), 27-51.

Gimbutas, M. (1982). Old Europe in the fifth millennium B.C.: The European situation on the arrival of Indo-Europeans. In E. Polomé (Ed.), The Indo-Europeans in the Fourth and Third Millennia (Vol. 14).

Jackendoff, R. (2003). Précis of Foundations of language: Brain, meaning, grammar, evolution. Behavioral and Brain Sciences, 26(6), 651-665.

Jackendoff, R. (2010). Your theory of language evolution depends on your theory of language. In The evolution of human language: Biolinguistic perspectives (pp. 63-72).

Lemey, P., Rambaut, A., Drummond, A. J., & Suchard, M. A. (2009). Bayesian phylogeography finds its roots. PLoS Computational Biology, 5(9), e1000520.

Nevins, A., Pesetsky, D., & Rodrigues, C. (2009). Pirahã exceptionality: A reassessment. Language, 355-404.

Pinker, S., & Jackendoff, R. (2005). The faculty of language: What's special about it? Cognition, 95(2), 201-236.

What is Alexa Answers and what sociological and anthropological insights can be drawn by the questions?

I have been conducting a statistical study of Alexa Answers [1], and what I found is quite astonishing. These results have a robust impact on our understanding of how people use Voice First [2] devices like Alexa from an anthropological, sociological, and business perspective.

Specimen of Alexa Answers Control Panel.

Empirical Research Study On Alexa Answers

I was commissioned to conduct this research by a notable and very astute Sand Hill Road venture capital group. My report was a mouthful, with over 600 pages including some relatively unknown and somewhat obscure patent references. I asked for and was granted permission to post some of the insights as an answer to this Quora question, and I have deep gratitude to them for allowing me to share a study they paid very generously for. They agreed with me that there is an undiscovered continent ahead in how people are using Voice First devices at scale, not only surfacing hundreds of new business plans based on the questions received by Alexa, but also offering a window into how people are using Alexa. We also get a window into the human factors and psychology of interacting with a device that is conducting a limited anthropomorphic conversation in their home.

I come to this subject from an interesting and somewhat biased perspective. I have been a top writer at Quora [3] for about 8 years, and my work there has been published at HuffPost, Slate, Forbes, Apple News, Gizmodo, Newsweek, Inc, Daily Mail, and Business Insider. Thus I not only know the power and utility of Question and Answer (Q & A) sites, I also feel rather strongly that Quora has not only defined the category but continues to innovate at a rapid pace.

This is a condensation of a report that runs ~600 pages. The report had a number of goals beyond presenting how Alexa Answers works.
As with most of my commissioned reports, I typically identify multi-billion-dollar startup opportunities and true Clayton Christensen disruption, if this is indicated. I have isolated the elements related to Alexa Answers and Q & A sites in general, and how they are vitally important to the next 50 years in computing, which I call the Voice First revolution [4].

History of Q & A

To understand Alexa Answers in the context of current Q & A systems, we will need to digress and surface fundamentally important aspects of successful systems. This matters vitally to whether the answering community survives and thrives. As we will explore, this is not a certainty.

The Straight Dope

The first element of the modern Q & A system was decidedly not technical, but very popular: The Straight Dope newspaper column was first published in 1973 and syndicated around the US. Its popularity was partly based on the obscurity of the questions asked and the detailed, creative way they were answered. In many cases the average person could have looked up the question in a home encyclopedia or local library. Yet there was the element of how columnist Cecil Adams and illustrator Slug Signorino captivated and educated their steadily growing reader base. The Straight Dope was still published up to June 2018, when it started what may be a permanent hiatus.

The Usenet Oracle

The Usenet Oracle was a 1989 attempt at the first Internet-based Q & A site. Using a humorous Socratic style, it became a very popular pre-web place to find answers to questions; a completed exchange was called an Oracularity. This established a sort of passive-aggressive answering style that today dominates all parts of the Internet. Usenet, because it was text-based and primarily used by people who worked in the computer and software industry over 300-1200 baud modems, helped advance the terse, passive-aggressive Socratic style. For many reasons the Oracle fell out of use. One reason was the rise of the World Wide Web.
Another reason was that the culture of the community just about chased out any answerer who did not adopt the terse, passive-aggressive Socratic style.

ProfNet

By 1996 the commercialization of Q & A sites had started with the launch of PR Newswire's site ProfNet, an online community of communications professionals made to provide reporters access to expert sources. Although I did read The Straight Dope and attended to Usenet nightly, I count my first real Q & A experience as ProfNet, where I answered hundreds of questions centered primarily around technology, payments, and small businesses. The site was used by reporters, lawyers, and a fascinating array of government institutions where, to this day, I still do not know what they do and found it important not to ask. The site ended as other professional-expert sites surfaced at lower price points, aimed directly at either reporters or the legal field.

Google Questions And Answers

In the intervening years many segmented and subject-matter-focused Q & A sites sprang up and shuttered, until 2001, when Google Questions and Answers started. This service involved Google staffers answering questions by e-mail for a flat fee of $3.00. It was fully functional for about 24 hours, after which it was shut down, possibly due to excessive demand and tough competition from Yahoo. Determined, Google launched Google Answers in April 2002. A month later, a search feature was added so users could read question and answer pairs.

By late November 2006 Google reported that it planned to permanently shut down the service. The stated reason: “We considered many factors in reaching this difficult decision, and ultimately decided that the Answers community's limited size and other product considerations made it more effective for us to focus our efforts on other ways to help our users find information.” Keep this reason in mind, as it turns out to be rather important.
The culture of the community had also begun to decay over time, for many reasons.

Answer.com

Answer.com and sister site WikiAnswers started as an idealab project (with the Answer.com domain name) in 1996. Initially, questions and answers were displayed through a downloadable software product, today known as 1-Click Answers. The product launched free in 1999; beginning in 2003 it was sold to users on a perpetual license basis, and later as an annual subscription. The service suffered from a lack of proficient, quality answers and a very low customer base. By 2010 Answer.com had launched the alpha version of a Twitter-answering service nicknamed 'Hoopoe': when a question was tweeted to the site's official Twitter account, AnswersDotCom, an automatic reply was given with a snippet of the answer and a link to the full answer page on the Answers website. The service suffered from a lack of use.

Yahoo Answers

By 2005 Yahoo had established, to some degree, the first sustainable model for the modern Q & A site. The concept was simple: Yahoo Answers was a community-driven Q & A site, or “a knowledge market,” that allowed users both to submit questions to be answered and to answer questions asked by other users. This gave questioners and answerers an equal footing. It turns out some people are great at posing questions, some of them rather fascinating, while other people either have first-person knowledge of the subject matter or excel at researching an effective answer. Yahoo Answers had an early limited-use predecessor answered by Yahoo employees and contract workers, called Ask Yahoo, but it had very little use and exposure.

Yahoo established a standard in rights ownership: though the service itself was free, the contents of the answers were owned by the respective users, with Yahoo maintaining a non-exclusive, royalty-free, worldwide right to publish the information.

Yahoo Answers was a study in the good, bad, and ugly of Q & A sites.
Yahoo Answers allowed any question that did not violate its community guidelines. But the guidelines were an ever-changing, moving target. It also magnified what we users of the Usenet knew all too well: the terse, passive-aggressive Socratic style and the hidden agenda question paired with the hidden agenda answer. Categories like Politics and Religion & Spirituality, along with the typical "debunker" clans, began to cause internal strife between users and Yahoo. The anonymity of the answerers helped contribute to this environment; users could choose to reveal their Yahoo Messenger ID on their Answers profile page, but even this was not a direct identity, as some users had hundreds of IDs.

Yahoo initially started a points system to gamify Yahoo Answers, while misuse was handled by a user moderation system in which users reported posts that were in breach of the guidelines or the Terms of Service. Posts were removed if they received sufficient weight of trusted reports, primarily from users with a reliable reporting history. Deletion could be appealed: an unsuccessful appeal received a 10-point penalty, while a successful one reinstated the post and reduced the 'trust rating' of the reporter. If a user received a large number of violations in a relatively short amount of time, or one very serious violation, the abuser's account could be suspended. In extreme cases of a Terms of Service violation, the abuser's entire Yahoo ID would be suddenly deactivated without warning.

The point system was also implemented to encourage users to answer as many questions as possible, up to their daily limit. Once a user achieved and maintained a certain minimum number of such contributions, they could receive an orange "badge" under the name of their avatar, naming the user a Top Contributor. Users could lose this badge if they did not maintain their level of participation.
Once a user became a "Top Contributor" in any category, the badge appeared in all answers, questions, and comments by the user, regardless of category. A user could be a Top Contributor in a maximum of three categories. The points system was weighted to encourage users to answer questions and to limit spam questions. There were also levels with point thresholds, which granted more site access. Points and levels had no real-world value, could not be traded, and served only to indicate how active a user had been on the site. A notable downside of the points/level system was that it encouraged people to answer questions even when they had no suitable answer to give, simply to gain points. Users also received ten points for contributing the "Best Answer," which was selected by the question's asker. The voting function, which allowed users to vote for the answer they considered best, was discontinued in April 2014.

Yahoo Answers questions were initially open for answers for only four days. However, the asker could choose to pick a best answer for the question after a minimum of one hour. Comments and answers could still be posted after this time. To ask a question, one had to have a Yahoo account with a positive score balance of five points or more.

The culture of the community of Yahoo Answers decayed as many of the top answerers moved to Wikipedia and, later on, Quora in search of a better community and culture.

Quora

Quora was established on June 21, 2010 and received widespread acclaim very early on, praised especially for its interface and for the quality of the answers written by its users, many of whom were recognized as domain experts with first person knowledge in their respective fields. Quora's user base exploded almost instantly and, by late December 2010, the site was seeing spikes of visitors five to ten times its usual load, so much so that the website initially had difficulties handling the increased traffic.
When the website first came online, and for many years afterward, Quora refused to show ads because, as the company stated in 2016, "...ads can often be negative for user experience. Nobody likes banner ads, ads from shady companies, or ads that are irrelevant to their needs."

Quora established community moderators as well as moderators employed by the company. Quora requires users to register with the complete form of their real names rather than an Internet pseudonym; although verification of names is not required, false names can be reported by the community. This was done with the intent of adding credibility to answers and of controlling abuse. Users with a certain amount of activity on the website have the option to write their answers anonymously, but not by default.

Quora can be thought of as an ongoing science experiment. The site has gone through a great deal of evolution, each iteration surfacing from the tests of prior iterations. Quora has attributes from many of the Q & A sites that came before. For example, Quora has developed its own proprietary algorithm to rank answers, which works similarly to Google's PageRank, giving priority and weighting to an answer along with up/down votes and other engagements like reads and comments.

Quora has been by far the most successful Q & A site thus far, with millions of questions and answers and over 300 million monthly visitors. The site's answers are usually featured at the top of most search results, and it ranks as the 128th most visited site in the world.

Quora is really the only general interest Q & A site success story, and there are many reasons for this. To help scope this discussion: it is not a chicken-and-egg situation; the site always needs a deep and wide contingent of writers. The game mechanics and reward systems are vitally important and have contributed greatly to the site's success.

There are many elements that have motivated the most prolific writers on Quora.
One of the fundamental ones comes down to distribution: a good answer could potentially be seen by hundreds of thousands of people.

Another element is the elastic structure of the answers. Quora is not a Wikipedia-style rote restatement of "facts," and this is the fundamental strength of the site. If you made it to this point in this posting, you are experiencing it: the wide spectrum of elastic responses to a question.

Quora thrives because of what I have come to understand personally: it is the inventor of a new type of medium and long form writing. The terse, passive-aggressive Socratic style tends not to survive at Quora because of this. Whereas a vast majority of Q & A sites were infected with minimalist answers and easy-to-notice snide elitism, Quora has flourished with writers who, for the greater part, know and love first hand the subject matter of the domains they write about.

One of the most powerful secrets about Quora is that just about any question either has been posed or can be posed (within the site guidelines). This allows domain experts to flourish. It may seem like a minor element, but it is one of the reasons I have been active on the site for over a decade. Quora cares about accumulated first person knowledge: not so much the question, but the knowledge.

Storytelling is the fundamental tool humans have to learn. Quora's success is based on creating an environment where the domain expert with first person knowledge can craft a story about the subjects they have spent their lives on. Although not everyone may be a "great" writer from a technical perspective, somehow the passion that person has becomes shared along with the knowledge and information. This inevitably becomes longer form writing.

The culture and community at Quora is quite unlike any Q & A site I have participated in. There has been a camaraderie from the very early days through to the most current era.
There are many elements that have built the core group that carries this ethos forward. One is the comments section, where users can interact with the answerer and ask for higher resolution details about the question, as well as offer commentary and debate. Although not perfect, the real name element of Quora has maintained a civil commentary and debate environment.

The Basis For A Successful Q & A Program

We have established a number of criteria for successful and less-than-successful Q & A sites. This is very important for the longevity of Alexa Answers: if it cannot sustain consistently good answerers, the system will meet the same fate as other Q & A sites and fall into fiefdoms of despair.

Quora and Siri

In April 2010, a few days after Apple acquired Siri, I began to write about how vitally important sites like Quora would become to the Voice First revolution. I wrote "Is Quora Important to Siri" [5] as an example of the opportunity:

"Quora As A Best Source For Siri

We are at the precipice of a major shift in how we interact with our devices and the information that is in the cloud. In fact perhaps the entire electronic world controlled by these devices will see this shift.

Siri is the first real step in this direction of the 4th user interface, not replacing the Mechanical User Interfaces (the keyboard, mouse and gestures) but enhancing them and perhaps creating an entirely new use case for instant knowledge. The model we had up until this point may in the near future seem somewhat clumsy and dated when used to get at simple end point data. It is perhaps the tasks that all of us have become accustomed to that will be the most stark noticeable difference we will be confronted with. To put it simply, most of us find it easier to ask a question with our voice than to go through a Mechanical process.
In the old Mechanical User Interfaces model there are the obvious steps and the steps that are less obvious." —Brian Roemmele, Quora, 2010

I also answered a reciprocal question, "Is Siri important to Quora?" [6]. In 2011 I wrote:

"Siri Will Need To Access The Entire Internet

This type of system is an enhancement to the Mechanical User Interfaces (Keyboard, Mouse, Gestures). But Siri is also far more than this; from a Meta view it is a system that will understand your words, extract intent, and create a plan to access the Domain APIs that will generate the information to either produce an answer or complete a task." —Brian Roemmele, Quora, 2011

Alexa Answers Research

In December 2018 Amazon launched the Q&A platform into beta with the goal of improving Alexa's ability to answer questions. On September 12, 2019 Alexa Answers went live to anyone with an Amazon account. In the announcement Amazon said the feature was well received by the early community of invite-only participants, who had contributed "hundreds of thousands of answers that have been shared with Alexa customers millions of times."

Specimen of the Alexa Answers website, 2019.

To differentiate these answers from other Alexa responses, they are attributed with an "an Amazon customer" post-amble. This attribution brings along many positive and negative aspects we will surface later. However, it is safe to say that this post-amble diminishes authority in the answer, and in Alexa as a system of authority as a whole.

During the launch Amazon stated the basis for Alexa Answers: "there are thousands of answers that had previously stumped Alexa, like 'Where was Barbara Bush buried?', 'Who wrote the score for Lord of the Rings?', 'What's cork made out of?' and 'Where do bats go in the winter?'"
It would seem that Alexa should have the ability to answer some of these questions based on Wikipedia searches or the default Bing searches.

Some experts feel this is a Knowledge Graph problem and that if the Knowledge Graph were large enough, all questions could be answered. This is the same thinking that assumes we will need Artificial General Intelligence, or to pass some Turing Test, for useful Voice First AI; history will show how painfully wrong this approach was, and how painfully obvious that was in hindsight.

As mentioned, you must have an Amazon account to participate in Alexa Answers. This turns out to have a filtering effect on some bad actors and some abusive, passive-aggressive answers. Many answerers would likely not want to jeopardize their Amazon account with highly negative behavior. Even though you can choose not to use your real name in Alexa Answers, Amazon still knows who you are. Yet this element holds a huge opportunity for Amazon beyond the filtering effect: the answerers are also Amazon customers, and the shopping habit correlation is obviously interesting self-released data. It can also be used to highly motivate good answers and good answerers in ways that space in this answer does not permit me to explore.

Amazon has a history of crowdsourcing book and product reviews since its inception. This experience is both a blessing and a curse. While it does help the company understand, on the surface, some elements of crowdsourcing ecosystems, it also creates the illusion that book/product reviews are somehow similar to answering a question; it will turn out not to be the case.

Specimen of the daily bonus questions from Alexa Answers.

Once signed in you are presented with eight general categories of interest/expertise, which then generate a list of available questions. You can choose to filter them by attributes like "most frequently asked" or "newest," or by other category areas. You have just 150 characters to answer the proposed questions.
It usually takes a few hours for an answer to be accepted by Amazon and published. This is clearly short form answering, and in most cases 150 characters is all the space needed. However, I discovered cases where it is clearly not enough, and inaccurate or incomplete answers result.

My research shows that the ideal verbosity of a response, and whether the user will find the response a good one, correlates with the Myers-Briggs Type Indicator (MBTI) of the user. I started building MBTI-aware systems in the late 1980s and can extract with high resolution the likely MBTI of the user, and thus reflect a correlating MBTI in the persona and answers the Voice First system presents. Although it sounds complex, just knowing and presenting the MBTI correlation of the user can make an answer more useful and can also help in understanding the meaning of a question in real time.

Specimen of Alexa Answers community leaderboards.

After you submit an answer, you earn points toward monthly and weekly leaderboards, and badges based on how many questions you have answered, how many times your answers have been shared with Alexa users, and more. This sort of game mechanics initially serves as a motivating force, but for many it loses power over time as more people fill the system with answers. This is similar to the gamification used in Amazon book/product reviews. Amazon says:

"This new feature is just one example of the many ways we're continuously working to grow Alexa's knowledge. As always, we'll continue to evolve the experience based on customer feedback."

Specimen of Alexa Answers levels and points.

Although game mechanics have a place in just about everything, they can also serve as a big distraction and cause actions that may not generate quality answers or answerers. We can learn a great deal from the errors of Yahoo and Google in this element.

Specimen of the Amazon Mechanical Turk website.

I am told Mechanical Turk [7], an Amazon service, used the same interface as Alexa Answers.
It is interesting that prior to Alexa Answers, Amazon paid Mechanical Turk workers to answer Long Tail questions. We can only speculate whether this was a successful method. With Alexa Answers there are dashboards and leaderboards showing how much your answer was used by Alexa and how often it was "Star" rated, which is standard gamification. When an Alexa user is presented with the answer, a random follow-up after the post-amble sometimes asks "did you find this answer useful?". The issue with this type of feedback is that it is not really a true form of feedback; it is entirely situational and subjective. Complicating the gamification, the community of answerers can vote 1 to 5 stars on your answer. Voting may also be automated by the game mechanics systems Amazon built to motivate better answers. However, this is one area where bad actors can potentially form groups, as they do on all Q & A sites, and vote down certain answers and certain answerers. This has a sort of chilling effect on the results that are generated. Additionally, by not using a true Likert Scale [8], a type of rating scale used to measure attitudes or opinions, the binary helpful/not-helpful signal is almost useless; it manifests as a thumbs up or thumbs down on the Alexa Answers dashboard next to the posted answer.

I have found ways to perform a Voice First AI Likert test that is not obtuse or obvious, as a post-conversation process the system uses to understand and create true feedback. Of course, overtly asking for a 1 to 5 Likert score would not be useful; I do not ask for a Likert score directly but arrive at one indirectly through follow-up questions and answers.

Specimen of Likert Scale questions in textual form.

So we are stuck in a world of crowdsourced answerers judging other answerers and the answers they produce, along with the binary "useful" or "not useful" vote by the user. Both signals are ambiguous beyond the obvious and overtly incorrect or bad faith answer.
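As an aside, the information lost in collapsing a Likert rating down to a binary vote is easy to demonstrate. The sketch below is my own illustration with invented ratings; it is not Amazon's data or scoring logic:

```python
from statistics import mean, pvariance

# Hypothetical illustration: two answers with invented 1-5 Likert ratings.
# A binary helpful/not-helpful collapse scores them identically, while the
# full Likert data reveals that one answer is polarizing and the other is not.

def helpful_ratio(ratings, threshold=3):
    """Collapse 1-5 ratings to a binary helpful (> threshold) proportion."""
    return sum(1 for r in ratings if r > threshold) / len(ratings)

answer_a = [5, 5, 5, 1, 1]  # polarizing: loved or hated
answer_b = [4, 4, 4, 2, 2]  # mildly positive throughout

print(helpful_ratio(answer_a), helpful_ratio(answer_b))  # 0.6 0.6 -- identical
print(mean(answer_a), mean(answer_b))                    # 3.4 3.2
print(pvariance(answer_a), pvariance(answer_b))          # 3.84 0.96
```

The binary signal cannot tell these two answers apart; the variance of the full Likert responses can.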
Yet there are better ways to determine this beyond crowdsourcing.

On the surface this seems to be a very simple and straightforward Q & A system, with Alexa questions serving as the basis for the answerers to respond to. Yet this is not nearly the same feedback loop that exists on successful long form Q & A sites like Quora. Although not immediately obvious even to experts in this field, there exists a vital feedback loop that allows for a better answer. The quagmire starts with the fact that this is a post-processing result, meaning the original questioner will likely never ask the question again, for a number of reasons. Thus we have a branch in the taxonomy that may or may not bear any fruit. There are a number of ways for Amazon to solve this problem, but this approach will not prove to be successful, for many more reasons; this is just one.

Thus Alexa Answers can be seen, at best, as only a way to post-process a question via crowdsourcing. In my analysis, which we will explore later on, we will surface many examples of questions that statistically fall below the crucial "noise rate" of .01% likelihood. Even more interesting is the fact that many of the questions are in a format that can easily be answered by a protocol, not the path Amazon and others take, which I call "rote machine learning." One way to understand this is how you recognize a bird. When you were first introduced to a bird, you formed a protocol for what an ontology of living creatures is like. You then furthered a taxonomy of bird-likeness over a very short period of hours to days. You did not need to see 10,000 images of a bird in various light settings, for an array of reasons, many related to the tens of thousands of protocols and sub-protocols that run anytime you are greeted with the new.
Some think of this as pattern matching, and that is a very low resolution way to present what is happening.

This is part of systems I have built that I call The Intelligence Amplifier [9], and one of the fundamental aspects of the system is to extract intents and meanings, and thus extract answers based on the protocols indicated and the information the intelligent agents return in real time.

"BOOM! Meet https://t.co/qdKFgeC498 her first time heard in public. She is particularly adept at surfacing information not found on the Internet or quite hidden. She is a 100% proactive system that uses a patented notification system. #TheIntelligenceAmplifier is many folks. pic.twitter.com/twolfpNkK1" —Brian Roemmele (@BrianRoemmele), August 25, 2019

Specimen of Agatha.Best, one of The Intelligence Amplifiers, performing research.

The Perils Of The Voice First Fence Line

From the user's experience, when Alexa does not know an answer, a fence line is erected in the mind of the user each time they hear "I don't know that" or "I can't do that." Once established after enough of these events, the boundaries are profoundly difficult to change. This is why most Voice First platforms will meet with issues that may cause extremely constrained use cases over time. It is also why the companies that build these platforms need to acquire very astute people who have understood this issue for decades, not just since 2008 or so. I call this empirical praxis, and the hiring practices of all the companies that build Voice First platforms are usually limited to degreed computer engineers, AI engineers and linguistics experts. There are exceptions, but these are the cohorts building these systems.
There is absolutely nothing wrong with these professions, indeed they are needed, but there are hundreds of experts who would be shown the door before HR even had a chance to speak with them.

How To Answer Any Question

Alexa Answers was built on the premise that "long tail" questions require human respondents to extract meaning. It is an engineering solution to a human problem that is not fully understood by the engineer. It is classic ex post facto design, where the loop is never closed for the user. Thus we will have an ever-widening array of questions that seem unique but have a protocol basis that could allow them to be answered in real time, with no knowledge graph or AGI. In fact, this is precisely what I did. Using protocols I developed in The Intelligence Amplifier, "I" answered a few thousand questions across 16 test accounts (don't ask how I established these active and legal Amazon accounts, but no ill will was involved). All of the questions were processed by the protocols and produced answers that were accepted by Amazon and distributed through the Alexa universe. In effect, The Intelligence Amplifier took questions Alexa could not answer and answered them, usually in seconds; I just redirected the answers via a simple script back to Alexa Answers. I had to throttle the system, as at some point 30 answers a minute would raise suspicion.

I did manually answer a few questions to test the fence line and boundaries of the system. There were interesting artifacts I discovered in just the questions I answered myself and did not allow The Intelligence Amplifier to answer. In one case, for which I cannot publish the question or answer for a number of reasons (none illicit in any way, just proprietary), I found many "5 star" votes for this obscure answer and a 5 day delay before Amazon published it.

I have been building databases of typically asked questions since The Usenet Oracle in 1989.
Thus, with all the Q & A sites I have been a part of or conducted research on, I have accumulated over 16 million questions and a percentage of their answers. This has allowed me to empirically draw on over 30 direct research databases. I have conducted a longitudinal study of this research over the years and recently created a new study that includes what I have found with Alexa Answers.

Ontologies And Taxonomies

There can be all kinds of taxonomies in an ontology. However, an ontology attempts to describe and capture an entire subject area with all of its complexity, whereas a taxonomy tries to simplify a complex collection of seemingly unrelated items into a linear organization. We can think of the way Alexa's category system works as an ontology, in as much as the actual question is captured in its complexity but flows into eight taxonomies. In a real sense the top level of categories could be seen as a true taxonomy, but I will use ontology labeling for a number of reasons. One reason is that each question could and should form a true ontology, to be correctly answered by a correctly established protocol that categorizes the elementals of the question so as to reuse them in future question/answer pairs.

By far, a significant number of the resulting short form questions at the top of the ontological tree can be answered directly with Wikipedia as the primary source. All of the Voice First platforms use Wikipedia resources and Wolfram Alpha as a typical answer source. In the case of Alexa, my research shows that the taxonomies flow from eight primary ontologies:

Video
Geography
Science
History
Music
Food
Literature
Sports

Internally there may be more top level ontologies, but these are the eight that have been exposed so far during my research. Even a cursory overview will establish that the whole of possible questions cannot easily be held in the top level ontology that Alexa appears to use.
Thus the Science category has some odd additions, forming a sort of "kitchen sink" catch-all for question categories that do not fit the very limited ontologies.

Twenty-three percent of the overall questions asked by Alexa users are in the Science and Food categories. It is important to understand there will be distortions in the overall questions, inasmuch as Alexa Answers shows only questions that, in theory, Alexa did not provide a useful answer for. However, from my studies of Q & A sites, I feel we have a rather robust cross section of Alexa question situations.

The distribution of question ontologies does not completely reflect the relationships of the types of questions posed to Alexa; however, we can use this data as a rather useful starting point.

Specimen of research by Brian Roemmele on the distribution of 5000 questions on Alexa Answers.

Why are these ontologies important? They serve two primary functions:

In real time, they can establish a guide to the possible answers to ineffectually posed questions.

On answer misses, they can notify answerers on Alexa Answers of the category selected.

Consider the Science ontology. Here are two specimen questions presented by Alexa Answers:

A) What's the freezing temperature of ice?

B) How many squirrels are left in the world?

Although it is possible to categorize item B as "science," it is really more of a trivia sort of question that likely has no exact answer. By grouping these types of questions together, these limited ontologies get distorted. Most trivia questions involve some statistical science, but they do not truly belong in the science tree.
I use over 6000 top level ontologies based on some dozens of studies, real-life testing and the database of Q & A sites I have amassed.

Consider this axiom:

The fine detail of anything is just 90 steps away.

How many times do I need to cut a cherry pie before I get to a single atom? It takes ~90 successive cuts.

Although the long tail of questions looks to be an endlessly complex problem, and to be sure it is if you use the approach every Voice First platform uses, it is reducible to the logical in fewer than 20 "cherry pie cuts." And with the context of the flow of preceding questions and other telemetries, it can be achieved in many cases with 5 questions to the questioner. Thus, if these sorts of protocols were instituted, there would really be no need for human answerers in many cases.

A Deep Statistical Study Of Questions Posed By Alexa Users

So let's dive deeper into the rich insights we can derive from Alexa Answers. There are many anthropological, sociological and business perspectives and insights I have drawn from my research. Although this is ongoing research, the early results are nothing less than fascinating. Alexa Answers as a research tool offers astounding possibilities, with the private questions of millions of users presented. Have no doubt they are deemed private questions, because there is no overt preamble telling the Alexa user that the question they just posed will be seen by a crowdsourced answerer community. Can one see anything particularly useful or personally identifiable? In a very limited case the answer is yes. It is not intentional by Amazon but an artifact of how they process and present questions. I will not publish what I discovered publicly, but it is not a big concern, just a general one.

To understand the intent(s) and meaning(s) of a question, we use a process that is ingrained into our conscious and subconscious. But it is designed to be nuanced by interaction, not by a typically binary Question/Answer.
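As a quick aside, the cherry pie axiom from earlier can be sanity-checked with an order-of-magnitude estimate. The pie mass and composition below are my own rough assumptions (a ~1 kg pie treated as water for simplicity):

```python
import math

# Back-of-envelope check of the "~90 cuts" axiom: each cut halves the pie,
# so the number of halvings needed to reach one atom is log2(total atoms).
AVOGADRO = 6.022e23
pie_mass_g = 1000.0       # assumed ~1 kg pie
molar_mass_g = 18.0       # treat the pie as water (H2O) for simplicity
atoms_per_molecule = 3    # 2 hydrogen + 1 oxygen

atoms = (pie_mass_g / molar_mass_g) * AVOGADRO * atoms_per_molecule
cuts = math.log2(atoms)
print(f"~{round(cuts)} successive halvings")  # on the order of 90
```

Any reasonable pie gives a result in the high 80s to low 90s, because the answer only moves logarithmically with the assumptions.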
To help current AI systems extract intent and arrive at meaning, we typically use a linguistic breakdown of the sentence into semantics. We use a Reed–Kellogg system, or parse tree, of the question, which can typically be broken down by functional parts:

subject
object
adverbial
verb (predicator)
nouns

The subject is the owner of an action, the verb represents the action, the object represents the recipient of the action, and the adverbial qualifies the action. The various parts can be phrases rather than individual words and relate to people, places and things. We use a sentence diagram, a pictorial representation of the grammatical structure of a sentence. The term "sentence diagram" is used more when teaching written language, where sentences are diagrammed. The term "parse tree" is used in linguistics (especially computational linguistics), where sentences are parsed. Both show the structure of sentences. The model shows the relations between words and the nature of sentence structure, and can be used as a tool to help recognize which potential sentence is actually a sentence.

Simple sentences in the Reed–Kellogg system are diagrammed according to these forms:

Specimen of the Reed-Kellogg system of parsing.

The Intelligence Amplifier uses a meta parse tree indirectly based on the Reed–Kellogg system and establishes a first pass numeric determinism for the question, and ultimately for the answer it presents, via a feedback loop similar to a Likert Scale along with the context of preceding questions. The numeric values are:

In band: +1 to +9
Out of band: -1 to -9
Obtuse: *1 to *9
Gibberish: ?1 to ?9

I use this scale along with ~92 others to help classify the ontologies and correctly extract intents and meanings. Without correct intent extraction and a road to meaning, the question and the answer are of course meaningless.

The Alexa Answers Baseline Study

For this paper I will use a sample set of ~5000 questions extracted from the Alexa Answers site over a period of 6 months in early 2019.
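The band notation above ('+' In band, '-' Out of band, '*' Obtuse, '?' Gibberish) is mechanical enough to parse. A minimal sketch of reading these annotations, illustrative only and not the production system:

```python
import re

# Minimal sketch: parse an annotation like "Science +7" or "Geography *9"
# into (category, band, strength). Illustrative only.
BANDS = {"+": "in band", "-": "out of band", "*": "obtuse", "?": "gibberish"}

def parse_annotation(tag):
    m = re.fullmatch(r"(\w+)\s*([+\-*?])\s*([1-9])", tag.strip())
    if m is None:
        raise ValueError(f"unrecognized annotation: {tag!r}")
    category, band, strength = m.groups()
    return category, BANDS[band], int(strength)

print(parse_annotation("Science +7"))    # ('Science', 'in band', 7)
print(parse_annotation("Geography *9"))  # ('Geography', 'obtuse', 9)
```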
A very curious insight I found early on is that many questions actually have Alexa answers already and did not need crowdsourcing to arrive at an answer; in and of itself, this is interesting.

Here are some sample questions with the associated ontology Amazon applied and my band scores:

What's the perfect temperature to brew coffee at? (Food +1)
How big is an atlas moth? (Science +1)
Where do honey crisp apples grow? (Food +1)
Did michael jackson go to college? (Music +6)
What was captain kangaroo's real name? (Video +1)
How tall was edward the first? (History +1)
What is the average weight of a motorcycle? (Science +7)
What country has the most females? (Geography +9)
How long is cat's been on broadway? (Video +4)
What is tiny turtles real name? (Video +9)
How long is the continental united states? (Geography -1)
Are sharks warm blooded? (Science +1)
What's the name of robin hood's hideout? (Video -6)
What city has the most snow fall? (Geography +5)
Who is danny treasure? (Music +9)
What is the national dish of greece? (Geography +4)
What is the city with the least population? (Geography +1)
Where is liverpool from? (Geography *9)
Who is the prime minister in nineteen forty two? (History ?9)
What should i do with used cooking oil? (Food +9)
How many calories in a tube of pringles? (Food +7)
How long does cauliflower cheese take to cook? (Food +5)
What is gerry adams in the i. r. a.? (History +7)
What is the average lifespan of a u. k. mail? (Science -9 *2)
Who sings someone's knocking at the door? (Music +5)
How many series of friends are there? (Video +9)
How fast can a staffordshire bull terrier run? (Science +9)
How do you cook a roast dinner? (Food -8)
Who invented the custard cream? (Food *5)
How many disabled people are there in the u. k.? (Geography +4)

In this very small sample we can already surface many interesting, if not astounding, insights into who is using Alexa and how they are using the system. We can also create an "Expectation Map" of what questioners assume is possible with Alexa.
I could write many books on this general subject, but the Expectation Map, not used by any Voice First platform, is very powerful. We can also see the obvious miscategorizations of ontologies (and you can't change them when answering the question) that will do nothing to help Alexa find a useful answer.

In what appear to be random questions, I have discovered after analyzing millions of questions since the 1990s that there are "attractors" and "format types" that repeat endlessly. One format type example is "How tall/heavy/old/etc is /famous person/?". The categories are littered with these questions, and they can be answered using even the crudest AI/ML tools Amazon has. Not only do these questions deserve their own ontology, they also show a mindset and mentality of the questioner. Although "shaming" words are removed from the questions, they are implied and in some cases are passed through.

Consider:

How much does jean claude van damme weigh? (Video)

This is not really a video ontology question, other than the fact that he is an actor. It is a format type question in the "How tall/heavy/old/etc is /famous person/?" structure. The parse tree is simple. But let's examine the psychological and perhaps sociological aspects, which are interesting. Why do we see about 7% of sample questions in this category?

Consider another format type question:

How tall is john berry? (Video)

Again, this is not in the video ontology, but we also have a potential common proper name spelling issue. Is it really "John Barry," or was it John Barrie, an English actor who appeared in a number of television shows and films and became well known for playing the title character on the police series Sergeant Cork from 1963-68 and playing Detective Inspector Hudson on Z-Cars from 1967-68? Or did the questioner mean J. M. "James Matthew" Barrie, author of "Peter Pan," who, like a character in his novels, was of diminutive height? We don't know, because the questioner is long gone and all that remains is the question.
However, The Intelligence Amplifier made an In Band +1 assumption that it was related to Peter Pan. We may never know; the questioner is long gone, and all that remains is a question that, to this day, Alexa is stumped on.

The format type questions are all In Band, but many questions are Out Of Band or even Obtuse. Some reasons are based on subject, object, adverbial, verb, and noun errors or misreadings by Alexa.

Consider:

How does an egg get fertilized? Food

This of course should be categorized, within the limited eight ontologies, as Science, yet it is Food. Indeed eggs are food, but what was the intent of the questioner? We can use format typing here also: "How does /thing/ /get/do/ /effect/". Although this seems a more complex format type, it really isn't. We are dealing with just eggs and the effect of fertilization. We assume food-grade eggs, but this may very well be a human reproduction question. How do we know? Context. Thus it is likely Alexa answered this as a food question, and that was not satisfactory, because it may have stayed in the ontology of food and produced a bizarre response to a human reproduction question.

The Intelligence Amplifier, based on millions of questions and format typing, found this to be a human reproduction question, not a food question about edible eggs. Thus it seems clear why it became an Alexa Answers question. Yet this miscategorization will prompt answerers to continue to present "wrong" answers because of the ontology.

I have discovered in my research ~3000 format type questions that can cover about 80% of the questions in my databases. Now understand, I am just some guy in a garage, but it is interesting how attractors form around certain subject matter.
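To make the "format type" idea concrete, here is a minimal sketch of how two such formats could be matched mechanically. The pattern names, the regular expressions, and the `classify` helper are all my own illustration under stated assumptions; this is not how Amazon or The Intelligence Amplifier actually works.

```python
import re

# Hypothetical sketch: two format types from the text, expressed as
# regexes. Real systems would use far more robust parsing.
FORMAT_TYPES = {
    # "How tall/heavy/old is /famous person/?"
    "person_attribute": re.compile(
        r"^how (tall|heavy|old|big) (is|was) (?P<entity>[\w .'-]+)\?$",
        re.IGNORECASE),
    # "How long do I cook /Food/ that weighs /weight/?"
    "cook_time": re.compile(
        r"^how long (to|does it take to|will it take to) cook "
        r"(a )?(?P<weight>[\w ]+ pound|[\w ]+ kilogram)s? (?P<food>[\w ]+)\?$",
        re.IGNORECASE),
}

def classify(question: str):
    """Return (format_type, captured slots), or (None, {}) if no match."""
    for name, pattern in FORMAT_TYPES.items():
        m = pattern.match(question.strip())
        if m:
            return name, m.groupdict()
    return None, {}

print(classify("How tall was edward the first?"))
# ('person_attribute', {'entity': 'edward the first'})
```

Once a question is slotted into a format type, the attribute lookup (height, weight, cook time) becomes a trivial, fully automatable step, which is the point being argued here.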
This is usually beyond the professional domains of linguistics and computer science, and it shows the blind spot in building Voice First systems with the "usual suspects" while lacking empirical praxis.

Beyond format type questions there are other types that could lend themselves to visual reinforcement.

Consider:

Where is denmark on the map? Geography

This should not have been an Alexa Answers question, for many reasons. Even if the questioner had no screen to show a map, it can be said thusly: "Denmark is above Germany, surrounded by the North Sea, west of Sweden." Did you need to see a map? Perhaps, if you are not familiar with geography to any deep level. This question also shows that a vast majority of questions are not "fact" questions but "feel" questions. For example, we could have given latitude and longitude data and digressed to highly localized geographical milestones, but to a high statistical percentage this is not what the questioner really wanted. They wanted a "feel" for where it is on the globe, and subsequent questions could explore details if desired. But programmers and linguistics experts in a group meeting would not agree to a "feel" for a question; they will be biased toward "facts". And in trying to find "facts" rather than a "feel" for a question, even Wikipedia failed to produce the fact needed. Is this a small issue? Not in the least: the "feel" questions represent over 61% of the questions I analyzed from Alexa Answers.

Consider:

What is the most thoroughly researched nutritional product in the world? Science

and

How many calories should you have for breakfast? Food

These questions don't have an objective fact; they call for a subjective insight. The questioners are not looking for anything really exact, because they likely know there is no universal direct answer. The answers are situational to many elements, including the health of the person asking and where they are located in the world.
So a "fact" approach, whether by machine, Wikipedia, or a human answerer, would yield a cascade of less than useful answers over time that never converges. But if a "feel" approach is used, one universal answer or format can be achieved. One answer vs. thousands.

As mentioned before, there are many questions in my research data from Alexa Answers that simply do not need to be presented to answerers. Some are easily classified as Out Of Band and Obtuse, and others are easily answered from Wikipedia and Bing.

Consider:

Is the universe still expanding? Science

and

When did bts debut? History

and

Where does mars come from? Science

and

Are there different galaxies? Science

All of these questions waste the time of Alexa Answerers and have simple answers, yes, even the BTS question. So why are they here to be answered? We can only speculate, but from this, along with many other observations, I can conclude there are some serious errors in how Amazon is approaching the extraction of intents and meanings.

There is a cross section of psychological states presented by the flow of questions found at Alexa Answers.

Consider:

How do i convert baking soda to baking powder? Food
What is the zip code for wenham massachusetts? Geography
Do deers have whiskers? Science
What year was the waco fire? History
How many people live in wild? Geography
How many ribs does a t. rex have? Science
Is the tower of babel still standing? Geography
How much does the heaviest cat weigh? Science
Where was the beatles last performance? Music
What are the properties of oobleck? Science
Who is johnny carson's sidekick? Video
What year did microwave ovens come out? Video
When did naruto first air? Video

We can conclude that some questions seem to be trivia, but on deeper analysis they are not. Some are historical or scientific "facts", but without the context of a conversation one could answer these questions in a manner that would not really reach the essence of the question. Clearly there is no reason for zip code questions to appear at Alexa Answers.
If this is a failure of the Alexa platform, it is a glaring one. I have seen dozens of zip code questions.

We can also infer the location of the Alexa device from the types of questions rendered, and get a feel for when these questions are being asked over the course of the year. It would be helpful for Amazon to show a date and time.

Consider:

How long will it take to cook a five kilogram turkey? Food

and

What temperature do you cook a twenty pound turkey at? Food

and

How long does it take to cook a twenty eight pound stuffed turkey? Food

and

How long to cook a sixteen pound stuffed turkey? Food

It does not take much to conclude that these questions are generated, to a high statistical percentage, around holiday seasons, likely the Thanksgiving and Christmas holidays in the US. You may have noticed a format question here, and you are correct. This is "How long do I cook /Food/ that weighs /weight/?", and with variations there are hundreds of these questions. There will be hundreds of Alexa Answerers producing specific answers by rote to the endless weight combinations. It is really quite shocking that these types of questions, weighted to almost the largest percentage, have not been automated. It is elementary.

The type and range of questions may seem random even to expert observers who do not study this deeply. Yet there is a rather clear psychology behind many questions, and attractor patterns are easy to see with even limited data.

Consider:

What is the most played video game in america? Video

and

What are french hens? Science

and

Which state has the least amount of natural disasters? Geography

and

What color is dove cameron's eyes? Video

and

Do you have a recipe for pumpkin seeds? Food

and

Does pumpkin pie get refrigerated? Food

and

Does sweet potato pie need to be refrigerated? Food

Again we see holiday cooking, more exactly some post-holiday cooking. We also see questions that relate to fears (natural disasters) and human interest (eye color).
There is much to be said about post-meal food preservation, in particular pies (were they homemade, or store-bought pies preserved unrefrigerated?). What is the "pie psychology" here? There are obvious business cases here, but there are also mindset aspects. We also see a mindset aspect in the utilization of pumpkin seeds, likely related to making a pumpkin pie or Halloween carving; again, the dates and times of the questions would create at least some limited context.

The psychology of the questioner can be extracted by the Affect protocols of The Intelligence Amplifier, and from this some correlating demographic and age cohort data can be extracted. From a sociological and anthropological basis I have also found a rich treasure trove of data. We know the general psychological state of the Alexa user, and how this will play out over time in sociological and anthropological studies. Part of my research creates a statistical word use study that aids in understanding the sentiments of the users. At a fundamental level this data can be used to make Alexa work better. Indeed, I have used this data to make The Intelligence Amplifier work better.

We can also find opportunities for new businesses, and in the case of this report over 27 points of Clayton Christensen disruption, either for Amazon if they discover them or for startups or legacy companies if they discover them. I cannot overstate how many opportunities have been surfaced just by this small study of Alexa Answers. Just knowing what people are curious about, and how they ask these questions, is an astounding opportunity.

Conclusions

It may be easy to read this presentation as an attack in some way on Amazon, the Alexa team, or the AI community as a whole. Read this clearly: it is not. In fact, one reason I requested to release some of this report to the public is the hope that it will in some way help them. Many ask why I am not working at these companies, where I could do this privately.
Well, the short of it is: I really don't know, other than the fact that I may be "unhirable" under the current hiring practices at play in technology companies. Even though I could pass the "prove you can code before you can come aboard" hazing tests at play to this day, I am still admittedly a Round Peg to their Square Hole.

It can be argued that some of this information is known to the Voice First companies. However, I can say with certainty, via private conversations, that most of the information and solutions presented here are not widely known, for if they were, they would not have fallen through to Alexa Answers. It is also very clear this is not just an Amazon problem; Google, Apple, and others, I know from other studies, have the exact same problem to a greater or lesser degree, along with other problems.

The Voice First revolution is relatively new. Answering the "long tail" [10] questions and finding the correct short form to long form answer is new for many of the people building these systems. Without empirical praxis, a purely engineering approach will not solve these issues. What do you do when you don't know the answer to a question? You find the answer. The next generations of Voice First systems will find better answers.

The Case Against General Artificial Intelligence And The log2(n) - n Paradox

One of the great debates raging in the tech community is that Voice First systems like Alexa, Siri, Cortana, Google, and others are too "dumb": "they can never answer every question". The truth is, of course, they are. These generation one systems fail outside very constrained domains. Yet applying the same criteria to humans, we all look pretty "dumb". The invention of writing, the printing press, the book, the floppy disk, and the Internet allow any properly equipped human to answer any question, or get a fairly good "feeling" of the potential answers.

It turns out humans think in a "fuzzy" way.
Our answers, as logical as they may sometimes seem, may have foundations in logic, but they are fuzzy in the way they are translated to humans. Most humans choose not to speak in segments of facts connected together, and those who do suffer a life of loneliness. Humans use analogy and reference to express things. Many researchers and observers assume this is exformation (information to be discarded). However, the things we use to present concepts, ideas, even commands have a multiplex quality in how they are said.

Thus, when we are asked a question, it is quite natural for us to simultaneously decode the multiplex layers of context. Many times we will ask additional questions to fully understand what was said or commanded. However, over time humans learn by doing and tend not to ask rebound questions; we drop prolixity, we speak in shortened sentences, and we take the sentences as a meta command or meta idea.

For computers to solve what I call the log2(n) - n paradox (or the Evan's paradox), they need to deal with the fuzziness of humans, and it turns out that it is not the "insolvable" problem that even learned experts suggest. The Turing Test was first proposed in 1950 by Alan Turing in the paper "Computing Machinery and Intelligence". Turing is commonly acknowledged as the father of artificial intelligence and computer science, and he developed the Imitation Game as a substitute for the question "Can machines think?". The Turing test is interesting, but it is not the baseline for how future Voice First systems will deal with the seemingly impossible task of log2(n) - n questions and log2(n) - n answers.
Moreover, Turing's intention was never to use his test as a way to measure the intelligence of AI programs, but rather to provide a coherent example to help arguments regarding the philosophy of artificial intelligence.

The idea that any AI system has to anticipate every question and produce a ready-made answer is not only unachievable, it is a complete misunderstanding of the way humans work. No system will be omniscient, omnipotent, and omnipresent. No human, even with the assistance of the Internet, can address log2(n) - n questions and log2(n) - n answers. Thus, to pose as a premise something no living being or mechanical system could achieve, as an argument against the utility of Voice First systems, is an argument under a false premise.

Alexa Answers does not seem to have the successful game mechanics that have made Quora successful. This is vitally important if the system is to work on a long-term basis and not face the Yahoo or Google fate. Additionally, we can conclude that even if Alexa Answers is successful, it will not nearly address the systemic problem at play here: either use humans to answer questions or go for an AGI Turing test approach. The reality is that it is not a binary choice. Using simple but very powerful protocols that you and I use every day would give the next generation of Voice First devices the ability to answer any question.

Sites like Quora, and systems to distill long form answers to the needs of the questioner in real time, will be a tremendous step forward for Amazon. We will always need Q & A systems that extract and utilize first person knowledge, which is inherently more the fuzzy feel of information and insight than Wikipedia-like fact. This is how all cultures have worked through history. Indeed, this is even how science works. We first arrive at the feel of a subject and then drill down to facts afterward, if desired. Humans get mired down with data, not with directionals of feel.
Some may argue this is a fundamental flaw in humans and may use it as an indictment of our current condition today. This is a misattribution. What humans need is a feel for the directional, to help find the overview, and then generate the supporting facts.

We do not see the facts of the 2 x 4s and sheetrock of the outside of a house; we see the overall structure of the house and can draw many conclusions about what functions it serves and perhaps the type of rooms it contains, because the directional feel of the subject has been determined: it is a house. Today we have the builders telling us how the house was made and what it is made of. This is natural for people who have to form logical program functions in code. But as this thought pattern matriculates up to the product, we actually wind up with a system that has far less utility than it could have.

[Figure: Voice First is the fastest adopted technology in history.]

Voice First is the fastest adopted technology in history. No technology has even come close, not the smartphone or the tablet. Amazon has a majority market share with Alexa. This is deserved and earned. Yet we have not seen anything yet; these are the very early days of the Voice First revolution, even as some of the issues I presented here on Alexa Answers persist.

Alexa will continue to do outstandingly well. Amazon is one of the few companies that gets what this technology represents and has dedicated over 20,000 people to making it better. No matter what, we can expect Voice First to get much better. We are at the precipice of a new way to communicate with computers and a new way to arrive at data, information, knowledge, insight, wisdom, and understanding. In the past we had to sift and sort through data and information in hopes of some knowledge. The Voice First revolution, with the correct protocols, will bring about insights and hopefully more wisdom and understanding.
We are awash in data; we need more wisdom and understanding.

[1] Alexa Answers
[2] There is A Revolution Ahead and It Has A Voice
[3] Brian Roemmele
[4] Brian Roemmele's answer to Is Amazon Echo (and/or Siri and other voice assistants) actually useful, or is it just a novelty? Are usage and retention of these products growing?
[5] Brian Roemmele's answer to Is Quora important to Siri?
[6] Brian Roemmele's answer to Is Siri important to Quora?
[7] Amazon Mechanical Turk
[8] Likert scale - Wikipedia
[9] http://VoiceFirst.Expert

More details on the Alexa Answers product.

What is Alexa Answers?
Alexa Answers is a community where customers can answer questions for Alexa. It contains questions in categories such as Science, History, Literature, and Music which Alexa would like your help in answering.

Where do you get questions from?
The questions on Alexa Answers are facts Alexa customers want to know. If enough customers ask Alexa a question she cannot currently answer, that question may be published on Alexa Answers.

How can I add answers to Alexa Answers?
Just browse the categories, select a question you would like to answer, type your answer, and click submit.

How is my Amazon profile displayed, and can I contribute anonymously?
You have the option to contribute anonymously or to display your Amazon profile information to other users on Alexa Answers. If you choose to display your profile information, your profile name and profile image will appear on your answers and throughout the site. It will not be shared when Alexa delivers the answer to an Alexa customer. If you choose to hide your information, you will appear as "Anonymous" to other users. You can manage your Amazon profile visibility in Settings.

What makes a good answer for Alexa Answers?
Good Alexa answers respond to the question briefly, directly, and accurately, in a contributor's own words.
They do not contain any content that is obscene, threatening, defamatory, invasive of privacy, or infringing of intellectual property rights (including publicity rights).

What answers are not accepted on Alexa Answers?
In order to generate helpful responses for Alexa to use when she is asked questions in the future, answers must be kept to under 300 characters. Answers may also be automatically rejected if they contain any of the types of objectionable content listed above. For additional information about what content is and is not accepted on Alexa Answers and other Amazon sites, please see Amazon's Conditions of Use.

What happens after I submit an answer?
If your answer is accepted, it may be made available on Alexa the next time a customer asks the question you answered. If more than one contributor answers a specific question, Alexa may rotate between answers until she gains enough feedback to determine which answer is the most useful. You can tell that your answer is available for Alexa to use if the blue "LIVE" icon appears below your answer.

How do I report an offensive or bad answer?
You can flag an answer on Alexa Answers to report it. Answers can be flagged if they are inappropriate (subjective, advice, vulgar, insulting, or offensive), incomprehensible (it doesn't make sense when read out loud), or incorrect (not factually correct or relevant). Once flagged, the answer will be hidden from your feed and reviewed.

What does it mean if my answer is flagged?
Your answer was reported by members of the Alexa Answers community and/or Alexa customers. Answers can be flagged if they are inappropriate (subjective, advice, vulgar, insulting, or offensive), incomprehensible (it doesn't make sense when read out loud), incorrect (not factually correct or relevant), or not helpful (it has received too many downvotes from Alexa customers).
Flagged answers are not visible on the Alexa Answers site and are not shared with Alexa customers. You can rewrite and resubmit your answer to address the reason for flagging. If you think your answer was flagged erroneously and would like to dispute it or get more clarity, you can contact us at email and we will review your answer and provide a response, if appropriate.

What do the thumbs-up and thumbs-down numbers below my answers mean?
When Alexa uses your answer, she may also elicit feedback by asking "Did that answer your question?" The number of likes reflects the number of times customers said YES to this question. Similarly, the number of dislikes reflects the number of times customers said NO. (As noted above, if enough customers say NO, your answer will be blocked and no longer used by Alexa.)

How are the leaderboards calculated?
Leaderboards show the top ten weekly and monthly contributors on Alexa Answers. The leaderboard ranks contributors based on points earned on answers submitted within the relevant category and time period. Monthly leaderboards reset on the first day of each calendar month. Weekly leaderboards reset every Monday.

What do the point values mean and how are they calculated?
Each answer is given a point score based on the number of times Alexa shares the answer and the quality of the answer. Answer quality is determined by Alexa customer feedback. When an Alexa customer hears an answer, they can tell Alexa whether the answer was helpful or not. The more positive upvotes an answer receives, the more points it earns.

I want to correct my answer; how can I edit it?
You can click the three-dot button to reveal the edit and remove actions. The sooner you do this after you submit the answer, the less likely you are to lose any feedback your answer has gathered.

What if I submit an answer by mistake? How can I delete my answers?
You can click the three-dot button to reveal the edit and remove actions.
Remove means your answer will no longer be used by Alexa, and all the statistics associated with the answer will also be deleted.

What does Alexa Answers do if there are multiple answers contributed by the community for the same question?
If more than one response to the same question is provided on Alexa Answers, Alexa will rotate between the answers until she has enough information to know which answer is the most useful. You can help Alexa choose the most helpful response by providing feedback. Answers with the most positive feedback will be used more frequently.

What is a HOT question?
A HOT question is a popular question on Alexa. It is also more likely to be asked again by other Alexa customers.

What are star ratings?
You can help Alexa Answers have great answers by rating answers submitted by other contributors. A 5-star answer is a high-quality answer. Alexa can use contributor ratings, along with feedback from Alexa customers, to determine when and how often each answer is used.

What should I do if I believe someone has used my intellectual property to respond to a question that appears on the Community Answers page?
You can report intellectual property infringement by sending an email with the following information: (1) a physical or electronic signature of the person authorized to act on behalf of the owner of the copyright interest; (2) a description of the copyrighted work that you claim has been infringed upon; (3) a description of where the material that you claim is infringing is located on the site (for example, a screenshot or copy/paste of the question and answer you claim is infringing); (4) your address, telephone number, and e-mail address; (5) a statement by you that you have a good-faith belief that the disputed use is not authorized by the copyright owner, its agent, or the law; and (6) a statement by you, made under penalty of perjury, that the above information in your notice is accurate and that you are the copyright owner or authorized to act on the copyright owner's behalf.

What happens when I follow a question?
When you follow a question, you opt into receiving Alexa device notifications about that question's answer activity. Click the star icon to follow a question. After following a question, Alexa will notify you through your Alexa device if another contributor submits an answer within 7 days.

Who are the participants in a 180-degree appraisal?

Traditionally, a 180 involves a manager getting feedback from their team. That means the direct reports of a manager give feedback and answer questions about them.

A 360, which is a bit more common in my experience, involves anyone who interacts with a manager: usually their boss, a few of their peers, and of course their team.

While a 360 can provide a lot of value in understanding how someone is living up to company values and collaborating with others, its weakness is its broadness. Questions in a 360 are usually pretty generic so that peers as well as your team can answer the same questions.

The beauty of the 180 is in its specificity.

By only asking a manager's direct reports for feedback on their manager, you can get very specific with your questions. That means you can better evaluate whether a person is specifically a good manager.

Learn from Google's Management Research

In Laszlo Bock's NY Times best-selling book, "Work Rules!: Insights from Inside Google," he reveals many of the secrets to Google's success with their people, culture, and managers. He shares the scientific methodologies behind their internal studies and many of the great findings they uncovered. He's even pretty honest about some of their failures.

One such discovery was the set of questions they use for team members to evaluate their managers. This helps Google reward their best managers and improve their worst ones.

Unlike the million-dollar stock grants that only mega companies like Google can offer, this is something every company can use to help improve their managers.

Let's take a look at what those questions are, and where evidence beyond Google supports their value.

Google Management's UFS (Upward Feedback Survey)

Below are the questions Google uses to measure their managers, as described in Bock's book.
(Specifically page 197 in Chapter 8, for those of you who want to read more for yourself.)

A few caveats: the details of how Google runs these surveys are important, so here are a few notes from the book:

- All questions are scored on a Likert scale of 1 to 5: Strongly Agree, Agree, Neutral, Disagree, and Strongly Disagree.
- They score the managers based on the percentage of scores that are favorable. Strongly Agrees and Agrees are considered favorable.
- Managers are compared to their previous scores and the average Google manager score at that time, so they can see if they're getting better or worse and how they compare to others.
- The surveys are 100% anonymous and do *not* impact performance management evaluations. This avoids managers gaming the surveys and pressuring team members into not answering honestly.
- Many managers go over their results with their team. This starts a dialogue about things that the manager can specifically change, flipping the usual manager-employee relationship.

Now that we know the questions, let's look at why they work.

The Table Stakes

Three of the questions aren't all that surprising. They're the table stakes that any company is likely to measure their managers on: competence and execution of tasks.

- My manager keeps the team focused on our priority results/deliverables.
- My manager communicates clear goals for our team.
- My manager has the technical expertise (e.g. coding in Tech, accounting in Finance) required to effectively manage me.

Most companies can quickly identify a manager lacking significantly in any of those 3 areas. And in most companies, that's usually the only thing they measure.
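As an aside, the percent-favorable scoring described in the caveats above reduces to a few lines of arithmetic. A sketch (the sample responses are invented for illustration):

```python
# Percent-favorable scoring on a 1-5 Likert scale: only 4 ("Agree")
# and 5 ("Strongly Agree") count as favorable, per Bock's description.
def percent_favorable(responses):
    responses = list(responses)
    favorable = sum(1 for r in responses if r >= 4)
    return 100.0 * favorable / len(responses)

# Invented example: 8 team members rate one survey item.
print(percent_favorable([5, 4, 3, 2, 5, 4, 1, 4]))  # 62.5
```

Note that this scoring deliberately treats Neutral the same as Disagree: a manager only gets credit for clearly positive responses.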
That's why the other six are the most important and worth further examination.

6 Surprising Questions that Ensure Effective Management at Google

With the obvious questions out of the way, let's look at each of the others in more detail by exploring what supporting information is out there beyond Google.

1) "My manager gives me actionable feedback that helps me improve my performance."

Delivered well, feedback can help you become much better at the work you do, which will help you advance in your career. The problem is, most people don't get enough of it. And managers are generally the people best placed to provide it, so when you're not getting enough, it's largely their fault.

While the data Google used to create their survey question is now years old, we know they were ahead of the curve. Millennials are a generation that craves more frequent feedback than any generation before, as a study from SAS revealed earlier this year.

With Millennials now the largest demographic in the workforce, managers must invest more in providing feedback. The best way to ensure that happens across your company is to make it a part of how you measure your managers.

2) "My manager does not 'micromanage' (i.e., get involved in details that should be handled at other levels)."

No one likes to be micromanaged, but it's easy to fall into the trap. It's one of the fastest ways to frustrate someone on the job and stymie their creativity.

Managers don't have all the answers. Their teams bring unique perspectives and are doing the actual work, which makes them more likely to create the best solutions. This shouldn't be surprising, but it's often overlooked.

That's why in Dan Pink's awesome TED talk on The Puzzle of Motivation, the first of his 3 keys is: Autonomy.
(And the others, Mastery and Purpose, translate well to the questions about Goals and Communication below.)

If you trust your people to do great work and find the best solutions, they may just surprise you.

You don't even need to have high tech workers to do it, either. The Toyota Production System is built on the premise that anyone on the assembly line can and should stop the line if there's a problem. Toyota also trains managers to solicit feedback and ideas for improvements from each team member. This doesn't happen unless you value autonomy and avoid micromanagement.

3) "My manager shows consideration for me as a person."

The stereotype of a bad manager is a self-absorbed leader who doesn't care for anyone else (see Dilbert, The Office, or Office Space). This is not the kind of person people are inspired to work hard for.

Instead, the kinds of managers that have the most effective teams start out by building a foundation of rapport and trust with their teams.

Just as Google found it a key trait of their best managers, external data backs this up. In Gallup's State of the American Manager report, they found that this kind of appreciation for your people was a huge driver of engagement.

Work is such a large part of your life. It's too short to work for people who don't care for you as a person. Checking whether people perceive that their manager cares about them is a good way to demonstrate it's an important trait for managers to exemplify at your company.

4) "My manager regularly shares relevant information from his/her manager and senior leadership."

Especially as your company grows, it can be easy for people to feel out of the loop and disconnected. Managers are often in more meetings and have access to more information that helps them understand what's going on and the impact of their work.
Since their team isn't in all of those meetings, it's important for that information to be communicated to them.

Leaders have to be deliberate about this, because otherwise there are dire consequences, as Ben Horowitz, former CEO of Opsware and VC at A16Z, has written:

"Perhaps the CEO's most important operational responsibility is designing and implementing the communication architecture for her company. The architecture might include the organizational design, meetings, processes, email, yammer and even one-on-one meetings with managers and employees. Absent a well-designed communication architecture, information and ideas will stagnate and your company will degenerate into a bad place to work."

A CEO can't talk with everyone all the time, so they need the help of their managers. This means your managers must become skilled at helping spread messages, often repeating themselves more than they feel is necessary.

The only way to be sure this is happening sufficiently is to ask employees what they think with a question like this.

5) "My manager has had a meaningful discussion with me about my career development in the past six months."

Here's a common story I have heard from a number of friends:

Joe joins a fast-growing, exciting company that brings new challenges and a better job title than he had before. He starts contributing right away. About 6-9 months in, he has his first performance review, and one of the questions he's asked is, "What are your career goals?"

"Finally! I can tell them about my goals," he thinks to himself. Joe proceeds to excitedly share his biggest goal with his manager, who eagerly takes notes and enters it into the HR review system.

And then nothing happens.

A year later, his next review comes around and he's asked again about his goals. Discouraged by the lack of progress, he says it's the same as last year.
Happy to finish the review faster, his manager copies his notes from the previous year.

Realizing he's not growing in his role, Joe starts interviewing for a new job that will provide his next growth opportunity. Meanwhile, HR can't figure out why Joe stayed less than two years.

Over and over, I've heard from companies with highly rated cultures that they still lose employees regularly. The common pattern? A lack of career progress has people looking elsewhere.

Are your people growing? If they're not, there's a real risk they will become bored, disengage, and start looking for new jobs that offer growth opportunities.

Once again, Google was ahead of the curve in realizing the value of talking about goals with your people. As Mary Meeker's slides in one of her annual Internet Trends reports showed, the top benefit Millennials want at work is "Training and Development."

If you want to retain your people, you have to help them grow. It makes every person on your team more valuable to your company and less likely to look for work elsewhere. A person's manager is the best person to help them make progress toward their goals, so it's important to ensure these conversations are happening.

Growth discussions are about making progress. Make a plan and discuss it in your 1 on 1s. Lighthouse can help you easily do this for everyone on your team. You can sign up for a free trial of Lighthouse here.

6) "I would recommend my manager to other Googlers."

The Net Promoter Score started as a survey question for marketers ("How likely are you to recommend Product X to a friend or colleague?") and has since been adapted for other purposes.

Many companies now ask an Employee Promoter Score question to gauge how likely people are to recommend working at the company. Google takes this one step further to zoom in on each manager.

Given that people quit managers, not companies, this makes a lot of sense.
Gallup's State of the American Manager report revealed that 50% of Americans have left a job to "get away from their manager at some point in their career," and that a manager's engagement directly impacts their team's engagement.

While not everything Google's management does can be brought to your company, these questions are well worth trying with your employees.

Want to easily run a 180 survey like this for your team? Then sign up for Manager Score. We purpose-built Manager Score to measure how managers are doing in the same key areas Google identified, and it even gives you advice on how to improve in areas of weakness. Get your Manager Score now here.
