2015-2016 Student Handbook With Student Code Of Conduct: Fill & Download for Free


How to Edit and draw up 2015-2016 Student Handbook With Student Code Of Conduct Online

Read the following instructions to use CocoDoc to start editing and signing your 2015-2016 Student Handbook With Student Code Of Conduct:

  • To begin with, find the “Get Form” button and click it.
  • Wait until 2015-2016 Student Handbook With Student Code Of Conduct is ready to use.
  • Customize your document using the toolbar at the top.
  • Download your finished form and share it as needed.

The Easiest Editing Tool for Modifying 2015-2016 Student Handbook With Student Code Of Conduct on Your Way

Open Your 2015-2016 Student Handbook With Student Code Of Conduct with a Single Click


How to Edit Your PDF 2015-2016 Student Handbook With Student Code Of Conduct Online

Editing your form online is quite effortless. You do not need to install any software on your computer or phone to use this feature. CocoDoc offers an easy tool to edit your document directly in any web browser. The entire interface is well organized.

Follow the step-by-step guide below to edit your PDF files online:

  • Browse to the official CocoDoc website from any web browser on the device where you have your file.
  • Find the ‘Edit PDF Online’ button and click it.
  • This opens the free tool page. Just drag and drop the PDF, or choose the file through the ‘Choose File’ option.
  • Once the document is uploaded, you can edit it using the toolbar as needed.
  • When the modification is complete, click the ‘Download’ icon to save the file.

How to Edit 2015-2016 Student Handbook With Student Code Of Conduct on Windows

Windows is the most widely used operating system. However, Windows does not include a default application that can directly edit PDFs. In this case, you can get CocoDoc's desktop software for Windows, which can help you work on documents efficiently.

All you have to do is follow the steps below:

  • Install the CocoDoc software from the Windows Store.
  • Open the software and drag and drop your PDF document into it.
  • You can also drag and drop the PDF file from Google Drive.
  • After that, edit the document as needed using the various tools at the top.
  • Once done, save the finished template to your device. You can also check more details about editing PDF documents.

How to Edit 2015-2016 Student Handbook With Student Code Of Conduct on Mac

macOS comes with a default feature, Preview, for opening PDF files. Although Mac users can view PDF files and even mark up text in them, Preview does not support editing. With CocoDoc, you can edit your document on Mac quickly.

Follow the effortless steps below to start editing:

  • First, install the CocoDoc desktop app on your Mac computer.
  • Then, drag and drop your PDF file into the app.
  • You can upload the PDF from any cloud storage, such as Dropbox, Google Drive, or OneDrive.
  • Edit, fill, and sign your template using the tool.
  • Lastly, download the PDF to save it on your device.

How to Edit PDF 2015-2016 Student Handbook With Student Code Of Conduct with G Suite

G Suite is Google's suite of intelligent apps, designed to make your work easier and increase collaboration within teams. Integrating CocoDoc's PDF editing tool with G Suite can help you accomplish work easily.

Here are the steps to do it:

  • Open the Google Workspace Marketplace on your computer.
  • Look for CocoDoc PDF Editor and install the add-on.
  • Upload the PDF that you want to edit, then find CocoDoc PDF Editor by clicking "Open with" in Drive.
  • Edit and sign your template using the toolbar.
  • Save the finished PDF file on your cloud storage.

PDF Editor FAQ

Which one produces better results in fluid intelligence, Cogmed or Dual N Back?

Dual n-back is a working memory exercise which stresses holding several items in memory and quickly updating them; Jaeggi et al 2008 found that training dual n-back increases scores on an IQ test for healthy young adults. If this result were true and influenced underlying intelligence (with its many correlates such as higher income or educational achievement), it would be an unprecedented result of inestimable social value and practical impact, and so is worth investigating in detail. In my DNB FAQ, I discuss a list of post-2008 experiments investigating how much and whether practicing dual n-back can increase IQ; they conflict heavily, with some finding large gains and others finding gains which are not statistically significant, or no gain at all.

What is one to make of these studies? When one has multiple quantitative studies going in both directions, one resorts to a meta-analysis: we pool the studies with their various sample sizes and effect sizes and get some overall answer - do a bunch of small positive studies outweigh a few big negative ones? Or vice versa? Or any mix thereof? Unfortunately, when I began compiling studies in 2011-2013, no one had yet done one for n-back & IQ; the existing study, Is Working Memory Training Effective? A Meta-Analytic Review (Melby-Lervåg & Hulme 2013), covers working memory training in general. To summarize:

However, a recent meta-analysis by Melby-Lervåg and Hulme (in press) indicates that even when considering published studies, few appropriately-powered empirical studies have found evidence for transfer from various WM training programs to fluid intelligence. Melby-Lervåg and Hulme reported that WM training showed evidence of transfer to verbal and spatial WM tasks (d = .79 and .52, respectively). When examining the effect of WM training on transfer to nonverbal abilities tests in 22 comparisons across 20 studies, they found an effect of d = .19.
Critically, a moderator analysis showed that there was no effect (d = .00) in the 10 comparisons that used a treated control group, and there was a medium effect (d = .38) in the 12 comparisons that used an untreated control group.

Similar results were found by the later WM meta-analyses Schwaighofer et al 2015 and Melby-Lervåg et al 2016. I'm not as interested in near WM transfer from n-back training - as the Melby-Lervåg & Hulme 2013 meta-analysis confirms, it surely occurs - but in the transfer with many more ramifications: transfer to IQ as measured by a matrix test. So in early 2012, I decided to start a meta-analysis of my own. My method & results differ from the later 2014 DNB meta-analysis by Au & Jaeggi et al (Bayesian re-analysis) and the broad Lampit et al 2014 meta-analysis of all computer training.

For background on conducting meta-analyses, I am using chapter 9 of part 2 of the Cochrane Collaboration's Cochrane Handbook for Systematic Reviews of Interventions. For the actual statistical analysis, I am using the metafor package for the R language.

LITERATURE SEARCH

The hard part of a meta-analysis is doing a thorough literature search, but the FAQ sections represent a de facto literature search. I started the DNB FAQ in March 2009 after reading through all available previous discussions, and have kept it up to date with the results of discussions on the DNB ML, along with alerts from Google Alerts, Google Scholar, and PubMed. The corresponding authors of most of the candidate studies, as well as Chandra Basak, were contacted with the initial list of candidate studies and asked for additional suggestions; none had a useful suggestion.

In addition, in June 2013 I searched for "n-back" AND ("fluid intelligence" OR "IQ") in Google Scholar & PubMed. PubMed yielded no new studies; Scholar yielded Jaeggi's 2005 thesis.
Finally, I searched ProQuest Dissertations & Theses Full Text (master's theses & doctoral dissertations) for n-back and n-back intelligence, which turned up no new works.

DATA

The candidate studies:

  • Jaeggi 2008
  • Li et al 2008: excluded for using non-adaptive n-back and not administering an IQ post-test.
  • Qiu 2009
  • polar 2009
  • Seidler 2010
  • Stephenson 2010
  • Jaeggi 2010
  • Jaeggi 2011
  • Chooi 2011
  • Schweizer 2011
  • Preece 2011
  • Zhong 2011
  • Jaušovec 2012
  • Kundu et al 2012
  • Salminen et al 2012
  • Redick et al 2012
  • Jaeggi 2012?
  • Takeuchi et al 2012
  • Rudebeck 2012
  • Thompson et al 2013
  • Vartanian 2013
  • Heinzel et al 2013
  • Smith et al 2013
  • Nussbaumer et al 2013
  • Oelhafen et al 2013
  • Clouter 2013
  • Sprenger et al 2013
  • Jaeggi et al 2013
  • Colom et al 2013
  • Savage 2013
  • Stepankova et al 2013
  • Minear et al 2013
  • Katz et al 2013
  • Burki et al 2014
  • Pugin et al 2014
  • Schmiedek et al 2014
  • Horvat 2014
  • Heffernan 2014
  • Hancock 2013
  • Loosli et al 2015 (supplement): excluded because both groups received n-back training, the experimental difference being additional interference, with near-identical improvements on all measured tasks.
  • Waris et al 2015
  • Baniqued et al 2015
  • Kuper & Karbach 2015
  • Zając-Lamparska & Trempała 2016: excluded because the n-back experimental group was non-adaptive (single 1/2-back)
  • Lindeløv et al 2016
  • Schwarb et al 2015
  • Studer-Luethi et al 2015: excluded because it did not report means/SDs for RPM scores
  • Minear et al 2016
  • Tayeri et al 2016: excluded for being quasi-experimental
  • Studer-Luethi et al 2016: found no transfer to the Ravens, but did not report the summary statistics so cannot be included

Variables:

  • active: moderator variable: whether a control group was no-contact or trained on some other task.
  • IQ type:
    • BOMAT
    • Raven's Advanced Progressive Matrices (RAPM)
    • Raven's Standard Progressive Matrices (SPM)
    • other (eg.
WAIS or WASI, Cattell's Culture Fair Intelligence Test/CFIT, TONI)
  • record speed of IQ test: minutes allotted (upper bound if more details are given; if no time limits, default to 30 minutes since no subjects take longer)
  • n-back type:
    • dual n-back (audio & visual modalities simultaneously)
    • single n-back (visual modality)
    • single n-back (audio modality)
    • mixed n-back (eg audio or visual in each block, alternating or at random)
  • paid: expected value of total payment in dollars, converted if necessary; if a paper does not mention payment or compensation, I assume 0 (likewise for subjects receiving course credit or extra credit - so common in psychology studies that there must not be any effect), and if the rewards are of real but small value (eg. For each correct response, participants earned points that they could cash in for token prizes such as pencils or stickers.), I code as 1.
  • country: what country the subjects are from/trained in (suggested by Au et al 2014)

TABLE

The data from the surviving studies, one comparison per row (columns: year, study, Publication, n.e, mean.e, sd.e, n.c, mean.c, sd.c, active, training, IQ, speed, N.back, paid, country):

2008Jaeggi1.8Jaeggi14142.928812.132.588FALSE20001000Switzerland
2008Jaeggi1.8Jaeggi14142.928712.861.46TRUE20001000Switzerland
2008Jaeggi1.12Jaeggi1119.551.968118.733.409FALSE30011000Switzerland
2008Jaeggi1.17Jaeggi1810.252.188881.604FALSE42511000Switzerland
2008Jaeggi1.19Jaeggi1714.713.546813.883.643FALSE47512000Switzerland
2009QiuQiu9132.13.2101305.3FALSE25022500China
2009polarMarcek11332.761.83825.129.37FALSE36003000Czech
2010Jaeggi2.1Jaeggi22113.673.1721.511.442.58FALSE370016191USA
2010Jaeggi2.2Jaeggi22512.283.0921.511.442.58FALSE370016020USA
2010Stephenson.1Stephenson1417.540.769.315.500.99TRUE40011000.44USA
2010Stephenson.2Stephenson1417.540.768.614.080.65FALSE40011000.44USA
2010Stephenson.3Stephenson14.515.340.909.315.500.99TRUE40011010.44USA
2010Stephenson.4Stephenson14.515.340.908.614.080.65FALSE40011010.44USA
2010Stephenson.5Stephenson12.515.320.839.315.500.99TRUE40011020.44USA
2010Stephenson.6Stephenson12.515.320.838.614.080.65FALSE40011020.44USA
2011Chooi.1.1Chooi4.512.721513.31.91TRUE24012000USA
2011Chooi.1.2Chooi4.512.722211.32.59FALSE24012000USA
2011Chooi.2.1Chooi6.512.12.811113.42.7TRUE60012000USA
2011Chooi.2.2Chooi6.512.12.812311.92.64FALSE60012000USA
2011Jaeggi3Jaeggi33216.944.753016.25.1TRUE28721011USA
2011Kundu1Kundu13311.73330.34.51TRUE100014000USA
2011SchweizerSchweizer2927.072.161626.54.5TRUE46323000UK
2011Zhong.1.05dZhong17.621.381.718.821.852.6FALSE12513000China
2011Zhong.1.05sZhong17.622.832.58.821.852.6FALSE12513000China
2011Zhong.1.10dZhong17.622.212.38.8211.94FALSE25013000China
2011Zhong.1.10sZhong17.623.121.838.8211.94FALSE25013000China
2011Zhong.1.15dZhong17.624.121.838.823.781.48FALSE37513000China
2011Zhong.1.15sZhong17.625.111.458.823.781.48FALSE37513000China
2011Zhong.1.20dZhong17.623.061.488.823.381.56FALSE50013000China
2011Zhong.1.20sZhong17.623.063.158.823.381.56FALSE50013000China
2011Zhong.2.15sZhong18.56.890.9918.55.152.01FALSE37513000China
2011Zhong.2.19sZhong18.56.721.0718.55.351.62FALSE47513000China
2012JaušovecJaušovec1432.435.651529.26.34TRUE180018.300Slovenia
2012Kundu2Kundu21110.812.32129.52.02TRUE100011000USA
2012Redick.1Redick126.253.082063FALSE7001100204.3USA
2012Redick.2Redick126.253.08296.243.34TRUE7001100204.3USA
2012RudebeckRudebeck279.522.03287.752.53FALSE40001000UK
2012SalminenSalminen1313.72.2910.94.3FALSE319120055Germany
2012TakeuchiTakeuchi4131.90.42031.20.9FALSE27013000Japan
2013ClouterClouter1830.844.111828.832.68TRUE400312.50115Canada
2013ColomColom2837.256.232835.468.26FALSE7201200204Spain
2013Heinzel.1Heinzel1524.532.91523.072.34FALSE54027.51129Germany
2013Heinzel.2Heinzel15173.891515.873.13FALSE54027.51129Germany
2013Jaeggi.4Jaeggi42514.962.713.514.742.8TRUE50013000USA
2013Jaeggi.4Jaeggi42615.232.4413.514.742.8TRUE50013020USA
2013OelhafenOelhafen2818.73.751519.94.7FALSE350045054Switzerland
2013Smith.1Smith511.52.99911.91.58FALSE34011003.9UK
2013Smith.2Smith511.52.992012.152.735TRUE34011003.9UK
2013Sprenger.1Sprenger349.763.6818.59.953.42TRUE4101101100USA
2013Sprenger.2Sprenger349.243.3418.59.953.42TRUE2051101100USA
2013Thompson.1Thompson1013.20.671912.70.62FALSE8001250740USA
2013Thompson.2Thompson1013.20.671913.30.5TRUE8001250740USA
2013VartanianVartanian1711.182.531710.412.24TRUE6011010Canada
2013SavageSavage2311.612.52711.212.5TRUE62512000Canada
2013Stepankova.1Stepankova2020.253.7712.517.045.02FALSE250330129Czech
2013Stepankova.2Stepankova2021.12.9512.517.045.02FALSE500330129Czech
2013NussbaumerNussbaumer2913.692.542711.892.24TRUE45013000Switzerland
2014Burki.1Burki1137.416.432035.957.55TRUE30013010Switzerland
2014Burki.2Burki1137.416.432136.866.55FALSE30013010Switzerland
2014Burki.3Burki1128.867.102031.206.67TRUE30023010Switzerland
2014Burki.4Burki1128.867.102327.616.82FALSE30023010Switzerland
2014PuginPugin1440.292.301541.331.97FALSE60033011Switzerland
2014HorvatHorvat14485.6815477.49FALSE22524500Slovenia
2014HeffernanHeffernan932.782.9110313.06TRUE4503200140Canada
2013HancockHancock209.323.472010.444.35TRUE810130150USA
2015WarisWaris1516.42.81615.93.0TRUE675330076Finland
2015BaniquedBaniqued4210.1253.02754510.253.2824TRUE2401100700USA
2015Kuper.1Kuper1823.66.1920.77.3FALSE150120145Germany
2015Kuper.2Kuper1824.95.5920.77.3FALSE150120045Germany
2016Lindeløv.1Lindeløv910.672.599.893.79TRUE26011030Denmark
2016Lindeløv.2Lindeløv83.252.7194.111.36TRUE26011030Denmark
2015Schwarb.1Schwarb260.112.5260.693.2FALSE480110380USA
2015Schwarb.2.1Schwarb220.232.510.5-0.362.1FALSE480110180USA
2015Schwarb.2.2Schwarb230.913.110.5-0.362.1FALSE480110280USA
2016Heinzel.1Heinzel.21517.731.201416.431.00FALSE54027.511Germany
2016Lawlor-SavageLawlor2711.482.983011.022.34TRUE39112000Canada
2016Minear.1Minear15.524.14.526245.1TRUE4001151300USA
2016Minear.2Minear15.524.14.53722.84.6TRUE4001151300USA

ANALYSIS

The result of the meta-analysis:

Random-Effects Model (k = 74; tau^2 estimator: REML)

tau^2 (estimated amount of total heterogeneity): 0.1103 (SE = 0.0424)
tau (square root of estimated tau^2 value): 0.3321
I^2 (total heterogeneity / total variability):
43.83%
H^2 (total variability / sampling variability): 1.78

Test for Heterogeneity:
Q(df = 73) = 144.2388, p-val < .0001

Model Results:

estimate      se    zval    pval   ci.lb   ci.ub
  0.3460  0.0598  5.7836  <.0001  0.2288  0.4633

To depict the random-effects model in a more graphic form, we use the forest plot:

forest(res1, slab = paste(dnb$study, dnb$year, sep = ", "))

The overall effect is somewhat strong. But there seem to be substantial differences between studies: this heterogeneity may be what is showing up as a high τ² and I²; and indeed, if we look at the computed SMDs, we see one sample with d=2.59 (!) and some instances of d<0. The high heterogeneity means that the fixed-effects model is inappropriate, as clearly the studies are not all measuring the same effect, so we use a random-effects model.

The confidence interval excludes zero, so one might conclude that n-back does increase IQ scores. From a Bayesian standpoint, it's worth pointing out that this is not nearly as conclusive as it seems, for several reasons.

Published research can be very weak (see the statistics/methodology discussion in the DNB FAQ); meta-analyses are generally believed to be biased towards larger effects due to systematic biases like publication bias.

Our prior that any particular intervention would increase the underlying genuine fluid intelligence is extremely small, as scores or hundreds of attempts to increase IQ over the past century have all eventually turned out to be failures, with few exceptions (eg. pre-natal iodine or iron supplementation), so very strong evidence is necessary to conclude that a particular attempt is one of those extremely rare exceptions. As the saying goes, extraordinary claims require extraordinary evidence. David Hambrick explains it informally:

…Yet I and many other intelligence researchers are skeptical of this research.
Before anyone spends any more time and money looking for a quick and easy way to boost intelligence, it's important to explain why we're not sold on the idea…Does this [Jaeggi et al 2008] sound like an extraordinary claim? It should. There have been many attempts to demonstrate large, lasting gains in intelligence through educational interventions, with few successes. When gains in intelligence have been achieved, they have been modest and the result of many years of effort. For instance, in a University of North Carolina study known as the Abecedarian Early Intervention Project, children received an intensive educational intervention from infancy to age 5 designed to increase intelligence. In follow-up tests, these children showed an advantage of six I.Q. points over a control group (and as adults, they were four times more likely to graduate from college). By contrast, the increase implied by the findings of the Jaeggi study was six I.Q. points after only six hours of training - an I.Q. point an hour. Though the Jaeggi results are intriguing, many researchers have failed to demonstrate statistically significant gains in intelligence using other, similar cognitive training programs, like Cogmed's… We shouldn't be surprised if extraordinary claims of quick gains in intelligence turn out to be wrong. Most extraordinary claims are.

Further, it's not clear that just because IQ tests like Raven's are valid and useful for measuring levels of intelligence, an increase on the tests can be interpreted as an increase of intelligence; intelligence poses unique problems of its own in any attempt to show increases in the latent variable of gf rather than just the raw scores of tests (which can be, essentially, gamed). Haier 2014 analogizes claims of breakthrough IQ increases to the initial reports of cold fusion and comments:

The basic misunderstanding is assuming that intelligence test scores are units of measurement like inches or liters or grams. They are not.
Inches, liters and grams are ratio scales where zero means zero and 100 units are twice 50 units. Intelligence test scores estimate a construct using interval scales and have meaning only relative to other people of the same age and sex. People with high scores generally do better on a broad range of mental ability tests, but someone with an IQ score of 130 is not 30% smarter than someone with an IQ score of 100…

This makes simple interpretation of intelligence test score changes impossible. Most recent studies that have claimed increases in intelligence after a cognitive training intervention rely on comparing an intelligence test score before the intervention to a second score after the intervention. If there is an average change score increase for the training group that is statistically significant (using a dependent t-test or similar statistical test), this is treated as evidence that intelligence has increased. This reasoning is correct if one is measuring ratio scales like inches, liters or grams before and after some intervention (assuming suitable and reliable instruments like rulers to avoid erroneous Cold Fusion-like conclusions that apparently were based on faulty heat measurement); it is not correct for intelligence test scores on interval scales that only estimate a relative rank order rather than measure the construct of intelligence….

Studies that use a single test to estimate intelligence before and after an intervention are using less reliable and more variable scores (bigger standard errors) than studies that combine scores from a battery of tests….

Speaking about science, Carl Sagan observed that extraordinary claims require extraordinary evidence. So far, we do not have it for claims about increasing intelligence after cognitive training or, for that matter, any other manipulation or treatment, including early childhood education.
Small statistically significant changes in test scores may be important observations about attention or memory or some other elemental cognitive variable or a specific mental ability assessed with a ratio scale like milliseconds, but they are not sufficient proof that general intelligence has changed.

For statistical background on how one should be measuring changes on a latent variable like intelligence and running intervention studies, see Cronbach & Furby 1970 & Moreau et al 2016; for examples of past IQ interventions which fade out, see Protzko 2015; for examples of past IQ interventions which prove not to be on g when analyzed in a latent-variable approach, see te Nijenhuis et al 2007, te Nijenhuis et al 2014, Nutley et al 2011, Shipstead et al 2012, Hayes et al 2014, Ritchie et al 2015, Estrada et al 2015 (and, in a null, the composite score in Baniqued et al 2015). This skeptical attitude is relevant to our examination of moderators.

MODERATORS

CONTROL GROUPS

A major criticism of n-back studies is that the effect is being manufactured by the methodological problem of some studies using a no-contact or passive control group rather than an active control group. (Passive controls know they received no intervention and that the researchers don't expect them to do better on the post-test, which may reduce their efforts & lower their scores.) The review Morrison & Chein 2011 noted that no-contact control groups limited the validity of such studies, a criticism that was echoed with greater force by Shipstead, Redick, & Engle 2012.
The Melby-Lervåg & Hulme 2013 WM training meta-analysis then confirmed that use of no-contact controls inflated the effect size estimates, similar to Zehnder et al 2009's results in the aged and Rapport et al 2013's blind vs unblinded ratings in WM/executive training of ADHD; and consistent with the increase of d=0.2 across many kinds of psychological therapies which was found by Lipsey & Wilson 1993 (but inconsistent with the g=0.20 vs g=0.26 of Lampit et al 2014).

So I wondered if this held true for the subset of n-back & IQ studies. (Age is an interesting moderator in Melby-Lervåg & Hulme 2013, but among the following DNB & IQ studies there is only 1 study involving children - all the others are adults or young adults.) Each study has been coded appropriately, and we can ask whether it matters:

Mixed-Effects Model (k = 74; tau^2 estimator: REML)

tau^2 (estimated amount of residual heterogeneity): 0.0803 (SE = 0.0373)
tau (square root of estimated tau^2 value): 0.2834
I^2 (residual heterogeneity / unaccounted variability): 36.14%
H^2 (unaccounted variability / sampling variability): 1.57

Test for Residual Heterogeneity:
QE(df = 72) = 129.2820, p-val < .0001

Test of Moderators (coefficient(s) 1,2):
QM(df = 2) = 46.5977, p-val < .0001

Model Results:

                     estimate      se    zval    pval    ci.lb   ci.ub
factor(active)FALSE    0.4895  0.0738  6.6310  <.0001   0.3448  0.6342
factor(active)TRUE     0.1397  0.0862  1.6211  0.1050  -0.0292  0.3085

The active/control variable confirms the criticism: lack of active control groups is responsible for a large chunk of the overall effect, with the confidence intervals not overlapping.
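The subgroup estimates above come from metafor's REML mixed-effects model; to illustrate the underlying idea of random-effects pooling, here is a minimal Python sketch of the simpler DerSimonian-Laird estimator, which one could apply separately to the passive- and active-control comparisons. The numbers below are toy values for illustration, not the real study data.

```python
import math

def dl_pool(effects, variances):
    """DerSimonian-Laird random-effects pool of effect sizes.

    effects: per-study standardized mean differences
    variances: their sampling variances
    Returns (pooled estimate, its standard error, tau^2).
    """
    w = [1 / v for v in variances]                # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                 # method-of-moments between-study variance
    wr = [1 / (v + tau2) for v in variances]      # random-effects weights
    est = sum(wi * e for wi, e in zip(wr, effects)) / sum(wr)
    se = math.sqrt(1 / sum(wr))
    return est, se, tau2

# toy "passive-control" subset with larger effects than a toy "active" subset
est, se, tau2 = dl_pool([0.9, 0.5, 0.6, 0.3], [0.10, 0.08, 0.12, 0.05])
print(est, est - 1.96 * se, est + 1.96 * se)      # pooled g with 95% CI
```

REML (used by metafor) and DerSimonian-Laird usually give similar tau² estimates; the moment estimator is shown here only because it is short enough to write out by hand.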
The effect with passive control groups is a medium-large d=0.5, while with active control groups the IQ gains shrink to a small effect (whose 95% CI does not exclude d=0). We can see the difference by splitting a forest plot on passive vs active:

[Forest plot: the visibly different groups of passive then active studies, plotted on the same axis]

This is damaging to the case that dual n-back increases intelligence, if it's unclear whether it even increases a particular test score. Not only do the better studies find a drastically smaller effect, they are not sufficiently powered to find such a small effect at all, even aggregated in a meta-analysis, with a power of ~11% - dismal indeed compared to the usual benchmark of 80% - which leads to worries that even that estimate is too high, and that the active-control studies are aberrant somehow, subject to a winner's curse or to other biases. (Because many studies used convenient passive control groups and the passive effect size is 3x larger, they in aggregate are well-powered at 82%; however, we already know they are skewed upwards, so we don't care whether we can detect a biased effect.)
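Power figures like these follow from a standard normal-approximation power formula for a two-group comparison; a minimal sketch (the per-group sample size of 25 below is an illustrative typical study size, not a figure from the dataset):

```python
from math import sqrt
from statistics import NormalDist

def power_two_group(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sample test for standardized effect
    size d with n subjects per group (normal approximation, two-sided)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = d / sqrt(2 / n_per_group)    # expected z-statistic under the alternative
    return 1 - NormalDist().cdf(z_alpha - ncp)

# The small active-control estimate (d = 0.14) needs ~805 per group for 80% power:
print(round(power_two_group(0.14, 805), 2))   # ~0.80
# whereas a hypothetical experiment with 25 subjects per group is badly underpowered:
print(round(power_two_group(0.14, 25), 2))
```

This is why the n > 1600 total figure mentioned in the next paragraph is so sobering: at effect sizes this small, a typical lab-sized study has essentially no chance of a statistically significant result.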
In particular, Boot et al 2013 argue that active control groups do not suffice to identify the true causal effect, because the subjects in the active control group can still have different expectations than the experimental group, and the groups' differing awareness & expectations can cause differing performance on tests; they suggest recording expectancies (somewhat similar to Redick et al 2013), checking for a dose-response relationship (see the following section for whether dose-response exists for dual n-back/IQ), and using different experimental designs which actively manipulate subject expectations to identify how much effects are inflated by remaining placebo/expectancy effects.

The active estimate of d=0.14 does allow us to estimate how many subjects a simple two-group experiment with an active control group would require in order to be well-powered (80%) to detect the effect: a total n of >1600 subjects (805 in each group).

TRAINING TIME

Jaeggi et al 2008 observed a dose-response to training, where those who trained the longest apparently improved the most. Ever since, this has been cited as a factor in which studies will observe gains, or as an explanation for why some studies did not see improvements - perhaps they just didn't do enough training. metafor is able to regress on the number of minutes subjects in each study trained for, to see if there's any obvious linear relationship:

         estimate      se    zval    pval    ci.lb   ci.ub
intrcpt    0.3961  0.1226  3.2299  0.0012   0.1558  0.6365
mods      -0.0001  0.0002 -0.4640  0.6427  -0.0006  0.0004

The estimate of the relationship is that there is none at all: the estimated coefficient has a large p-value, and further, that coefficient is negative.
This may seem initially implausible, but if we graph the time spent training per study against the final (unweighted) effect size, we see why:

plot(dnb$training, res1$yi)

IQ TEST TIME

Similarly, Moody 2009 identified the 10-minute test time or "speeding" of the RAPM as a concern in whether far transfer actually happened; after collecting the allotted test time for the studies, we can likewise look for an inverse relationship (the more time given to subjects on the IQ test, the smaller their IQ gains):

         estimate      se    zval    pval    ci.lb   ci.ub
intrcpt    0.4197  0.1379  3.0435  0.0023   0.1494  0.6899
mods      -0.0036  0.0061 -0.5874  0.5570  -0.0154  0.0083

A tiny slope which is also non-statistically-significant; graphing the (unweighted) studies suggests as much:

plot(dnb$speed, res1$yi)

TRAINING TYPE

One question of interest, both for issues of validity and for effective training, is whether the existing studies show larger effects for a particular kind of n-back training: dual (visual & audio; labeled 0), single (visual; labeled 1), or single (audio; labeled 2)? If visual single n-back turns in the largest effects, that is troubling, since it's also the one most resembling a matrix IQ test.
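A categorical moderator fit like this one is, conceptually, a per-level inverse-variance pooled estimate; a minimal fixed-effect sketch in Python with made-up numbers (metafor's mixed-effects factor() fit additionally estimates a residual tau² and adds it to each study's variance):

```python
def factor_moderator(effects, variances, levels):
    """Per-level inverse-variance pooled estimates for a categorical
    moderator: the fixed-effect analogue of a cell-means meta-regression."""
    out = {}
    for lev in sorted(set(levels)):
        pairs = [(e, 1 / v) for e, v, l in zip(effects, variances, levels) if l == lev]
        total_w = sum(w for _, w in pairs)
        out[lev] = sum(e * w for e, w in pairs) / total_w  # weighted mean within level
    return out

# made-up illustration: three dual n-back (0) and two single-visual (1) comparisons
print(factor_moderator([0.5, 0.4, 0.45, 0.2, 0.25],
                       [0.05, 0.05, 0.05, 0.05, 0.05],
                       [0, 0, 0, 1, 1]))   # one pooled estimate per n-back type
```

With equal variances this reduces to the plain per-level mean; with unequal variances, more precise studies dominate their level's estimate, which is why a couple of small studies of a rare n-back type (as in the audio and mixed levels here) yield wide confidence intervals.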
Checking against the kinds of n-back training:

Mixed-Effects Model (k = 74; tau^2 estimator: REML)

tau^2 (estimated amount of residual heterogeneity): 0.1029 (SE = 0.0421)
tau (square root of estimated tau^2 value): 0.3208
I^2 (residual heterogeneity / unaccounted variability): 41.94%
H^2 (unaccounted variability / sampling variability): 1.72

Test for Residual Heterogeneity:
QE(df = 70) = 135.5275, p-val < .0001

Test of Moderators (coefficient(s) 1,2,3,4):
QM(df = 4) = 39.1393, p-val < .0001

Model Results:

                 estimate      se    zval    pval    ci.lb   ci.ub
factor(N.back)0    0.4219  0.0747  5.6454  <.0001   0.2754  0.5684
factor(N.back)1    0.2300  0.1102  2.0876  0.0368   0.0141  0.4459
factor(N.back)2    0.4255  0.2586  1.6458  0.0998  -0.0812  0.9323
factor(N.back)3   -0.1325  0.2946 -0.4497  0.6529  -0.7099  0.4449

There are not enough studies using the other kinds of n-back to say anything conclusive, other than that there seem to be differences; but it's interesting that single visual n-back has weaker results so far.

PAYMENT/EXTRINSIC MOTIVATION

In a 2013 talk, Brain Training: Current Challenges and Potential Resolutions, with Susanne Jaeggi, PhD, Jaeggi suggests:

Extrinsic reward can undermine people's intrinsic motivation. If extrinsic reward is crucial, then its influence should be visible in our data.

I investigated payment as a moderator. Payment seems to actually be quite rare in n-back studies (in part because it's so common in psychology to just recruit students for course credit or extra credit), and so the result is that, as a moderator, payment is currently a small and non-statistically-significant negative effect, whether you regress on the total payment amount or treat it as a boolean variable.
More interestingly, it seems that the negative sign is being driven by payment being associated with higher-quality studies using active control groups, because when you look at the interaction, payment in a study with an active control group actually flips sign to being positive again (correlating with a bigger effect size).

More specifically, if we check payment as a binary variable, we get a decrease which is statistically significant:

                       estimate      se    zval    pval    ci.lb    ci.ub
intrcpt                  0.4514  0.0776  5.8168  <.0001   0.2993   0.6035
as.logical(paid)TRUE    -0.2424  0.1164 -2.0828  0.0373  -0.4706  -0.0143

If we instead regress against the total payment size (logically, larger payments would discourage participants more), the effect of each additional dollar is very small and 0 is far from excluded as the coefficient:

         estimate      se    zval    pval    ci.lb   ci.ub
intrcpt    0.3753  0.0647  5.7976  <.0001   0.2484  0.5022
paid      -0.0004  0.0004 -1.1633  0.2447  -0.0012  0.0003

Why would treating payment as a binary category yield a major result when there is only a small slope within the paid studies? It would be odd if n-back could achieve the holy grail of increasing intelligence, but the effect vanished immediately if you pay subjects anything, whether $1 or $1000.

As I've mentioned before, the difference in effect size between active and passive control groups is quite striking, and I noticed that, eg., the Redick et al 2012 experiment paid subjects a lot of money to put up with all its tests and ensure subject retention, & Thompson et al 2013 paid a lot to put up with the fMRI machine and long training sessions, and likewise with Oelhafen et al 2013 and Baniqued et al 2015 etc.; so what happens if we look for an interaction?
                                  estimate      se    zval    pval    ci.lb    ci.ub
intrcpt                             0.6244  0.0971  6.4309  <.0001   0.4341   0.8147
activeTRUE                         -0.4013  0.1468 -2.7342  0.0063  -0.6890  -0.1136
as.logical(paid)TRUE               -0.2977  0.1427 -2.0860  0.0370  -0.5774  -0.0180
activeTRUE:as.logical(paid)TRUE     0.1039  0.2194  0.4737  0.6357  -0.3262   0.5340

Using an active control group cuts the observed effect of n-back by more than half, as before, and payment decreases the effect size; but then, in studies which use active control groups and also pay subjects, the effect size increases slightly again, which seems a little curious if we buy the story about extrinsic motivation crowding out intrinsic motivation and defeating any gains.

BIASES

N-back has been presented in some popular & academic media in an entirely uncritical & positive light: ignoring the overwhelming failure of intelligence interventions in the past, not citing the failures to replicate, and giving short shrift to the criticisms which have been made. (Examples include the NYT, WSJ, Scientific American, & Nisbett et al 2012.) One researcher told me that a reviewer savaged their work, asserting that n-back works and thus their null result meant only that they did something wrong. So it's worth investigating, to the extent we can, whether there is a publication bias towards publishing only positive results. 20-odd studies (some quite small) is considered medium-sized for a meta-analysis, but that many does permit us to generate funnel plots, or check for possible publication bias via the trim-and-fill method.

FUNNEL PLOT

test for funnel plot asymmetry: z = 3.0010, p = 0.0027

The asymmetry has reached statistical significance, so let's visualize it:

funnel(res1)

This looks reasonably good, although we see that studies are crowding the edges of the funnel. We know that the studies with passive control groups show several times the effect size of the active control groups; is this related?
If we plot the residual left after correcting for active vs passive, the funnel plot improves a lot (Stephenson remains an outlier):

[Figure: mixed-effects funnel plot of standard error versus effect size, after moderator correction.]

TRIM-AND-FILL

The trim-and-fill estimate:

```
Estimated number of missing studies on the left side: 0 (SE = 4.8908)
```

Graphing it:

```
funnel(tf)
```

Overall, the results suggest that this particular (comprehensive) collection of DNB studies does not suffer from serious publication bias after taking into account the active/passive moderator.

NOTES

Going through them, I must note:

Jaeggi 2008: group-level data provided by Jaeggi to Redick for Redick et al 2013; the 8-session group included both active & passive controls, so the experimental DNB group was split in half. IQ test time is based on the description in Redick et al 2012:

In addition, the 19-session groups were given 20 min to complete the BOMAT, whereas the 12- and 17-session groups received only 10 min (S. M. Jaeggi, personal communication, May 25, 2011). As shown in Figure 2, the use of the short time limit in the 12- and 17-session studies produced substantially lower scores than the 19-session study.

polar: control, 2nd scores: 23,27,19,15,12,35,36,34; experiment, 2nd scores: 30,35,33,33,32,30,35,33,35,33,34,30,33

Jaeggi 2010: used BOMAT scores; should I somehow pool RAPM with BOMAT? Control group split.

Jaeggi 2011: used SPM (a Raven's); should I somehow pool the TONI?

Schweizer 2011: used the adjusted final scores as suggested by the authors, due to potential pre-existing differences in their control & experimental groups:

…This raises the possibility that the relative gains in Gf in the training versus control groups may be to some extent an artefact of baseline differences.
However, the interactive effect of transfer as a function of group remained [statistically-]significant even after more closely matching the training and control groups for pre-training RPM scores (by removing the highest-scoring controls), F(1, 30) = 3.66, P = 0.032, ηp² = 0.10. The adjusted means (standard deviations) for the control and training groups were now 27.20 (1.93), 26.63 (2.60) at pre-training (t(43) = 1.29, P > 0.05) and 26.50 (4.50), 27.07 (2.16) at post-training, respectively.

Stephenson: data from pg79/95; means are post-scores on Raven's. I am omitting Stephenson's scores on the WASI, Cattell's Culture Fair Test, & BETA III Matrix Reasoning subset because metafor does not support multivariate meta-analyses, and including them as separate studies would be statistically illegitimate. The active and passive control groups were split into thirds over each of the 3 n-back training regimens, and each training regimen was split in half over the active & passive controls.

The splitting is worth discussing. Some of these studies have multiple experimental groups, control groups, or both. A criticism of early studies was the use of no-contact control groups: the control groups did nothing except be tested twice, and it was suggested that the experimental groups' gains might be due in part solely to doing a task, any task, and that the control group should be doing some non-WM task as well. The WM meta-analysis Melby-Lervåg & Hulme 2013 checked for this and found that use of no-contact control groups led to a much larger estimate of effect size than in studies which used an active control. When trying to incorporate such a multi-part experiment, one cannot just copy the controls, as the Cochrane Handbook points out:

One approach that must be avoided is simply to enter several comparisons into the meta-analysis when these have one or more intervention groups in common.
This double-counts the participants in the shared intervention group(s), and creates a unit-of-analysis error due to the unaddressed correlation between the estimated intervention effects from multiple comparisons (see Chapter 9, Section 9.3).

Just dropping one control or experimental group weakens the meta-analysis, and may bias it as well if not done systematically. I have used one of its suggested approaches, which accepts some additional error in exchange for greater power in checking this possible active versus no-contact distinction, in which we instead split the shared group:

A further possibility is to include each pair-wise comparison separately, but with shared intervention groups divided out approximately evenly among the comparisons. For example, if a trial compares 121 patients receiving acupuncture with 124 patients receiving sham acupuncture and 117 patients receiving no acupuncture, then two comparisons (of, say, 61 acupuncture against 124 sham acupuncture, and of 60 acupuncture against 117 no intervention) might be entered into the meta-analysis. For dichotomous outcomes, both the number of events and the total number of patients would be divided up. For continuous outcomes, only the total number of participants would be divided up and the means and standard deviations left unchanged. This method only partially overcomes the unit-of-analysis error (because the resulting comparisons remain correlated) so is not generally recommended.
A potential advantage of this approach, however, would be that approximate investigations of heterogeneity across intervention arms are possible (for example, in the case of the example here, the difference between using sham acupuncture and no intervention as a control group).

Chooi: the relevant table was provided in private communication; I split each experimental group in half to pair it up with the active and passive control groups which trained the same number of days.

Takeuchi et al 2012: subjects were trained on 3 WM tasks in addition to DNB for 27 days, 30-60 minutes; RAPM scores used, BOMAT & Tanaka B-type intelligence test scores omitted.

Jaušovec 2012: IQ test time was calculated based on the description:

Used were 50 test items - 25 easy (Advanced Progressive Matrices Set I - 12 items and the B Set of the Colored Progressive Matrices), and 25 difficult items (Advanced Progressive Matrices Set II, items 12-36). Participants saw a figural matrix with the lower right entry missing. They had to determine which of the four options fitted into the missing space. The tasks were presented on a computer screen (positioned about 80-100 cm in front of the respondent), at fixed 10 or 14 s interstimulus intervals. They were exposed for 6 s (easy) or 10 s (difficult) following a 2-s interval, when a cross was presented. During this time the participants were instructed to press a button on a response pad (1-4) which indicated their answer.

That gives [math]\frac{25 \times (6+2) + 25 \times (10+2)}{60} = 8.33[/math] minutes.

Zhong 2011: dual attention channel task omitted; dual and single n-back scores kept unpooled, and controls split across the 2; I thank Emile Kroger for his translations of key parts of the thesis. Unable to determine whether the IQ test was administered speeded.
Zhong 2011 appears to have replicated Jaeggi 2008's training time.

Jonasson 2011: omitted for lacking any measure of IQ.

Preece 2011: omitted; only the Figure Weights subtest from the WAIS was reported, but RAPM scores were taken and published in the inaccessible Palmer 2011.

Kundu et al 2011 and Kundu 2012 have been split into 2 experiments based on the raw data provided to me by Kundu: the smaller one using the full RAPM 36-matrix 40-minute test, and the larger an 18-matrix 10-minute test. (Kundu 2012 subsumes 2011, but the procedure was changed partway through on Jaeggi's advice, so they are separate results.) The final results were reported in Strengthened effective connectivity underlies transfer of working memory training to tests of short-term memory and attention, Kundu et al 2013.

Redick et al: n-back split over passive control & active control (visual search); RAPM post-scores used (omitted SPM and Cattell Culture-Fair Test).

Vartanian 2013: short n-back intervention, not adaptive; I did not specify in advance that the n-back interventions had to be adaptive (possibly some of the others were not), and subjects trained for <50 minutes, so the lack of adaptiveness may not have mattered.

Heinzel et al 2013 mentions conducting a pilot study; I contacted Heinzel, and no measures like Raven's were taken in it. The main study used both the SPM and the Figural Relations subtest of a German intelligence test (LPS); as usual, I drop alternatives in favor of the more common test.

Thompson et al 2013: used RAPM rather than WAIS; treated the multiple object tracking/MOT group as an active control group, since it did not statistically-significantly improve RAPM scores.

Smith et al 2013: 4 groups. Consistent with all the other studies, I have ignored the post-post-tests (a 4-week followup).
To deal with the 4 groups, I have combined the Brain Age & strategy game groups into a single active control group, and then split the dual n-back group in half over the original passive control group and the new active control group.

Jaeggi 2005: Jaeggi et al 2008 is not clear about the source of its 4 experiments, but one of them seems to be experiment 7 from Jaeggi 2005, so I omit experiment 7 to avoid any double-counting, and only use experiment 6.

Oelhafen 2013: merged the lure and non-lure dual n-back groups.

Sprenger 2013: split the active control group over the n-back+Floop group and the combo group; training time refers solely to time spent on n-back and not the other tasks.

Jaeggi et al 2013: administered the RAPM, Cattell's Culture Fair Test / CFT, & BOMAT; in keeping with all previous choices, I used the RAPM data; the active control group is split over the two kinds of n-back training groups. This was previously included in the meta-analysis as Jaeggi4, based on the poster, but deleted once it was formally published as Jaeggi et al 2013.

Clouter 2013: means & standard deviations, payment amount, and training time were provided by him; student participants could be paid in credit points as well as money, so to get $115, I combined the base payment of $75 with the no-credit-points option of another $40 (rather than try to assign a monetary value to credit points or figure out an average payment).

Colom 2013: the experimental group was trained with 2 weeks of visual single n-back, then 2 weeks of auditory n-back, then 2 weeks of dual n-back; since the IQ tests were simply pre/post, it's impossible to break out the training gains separately, so I coded the n-back type as dual n-back, since visual+auditory single n-back = dual n-back, and they finished with dual n-back.
Colom administered 3 IQ tests - RAPM, DAT-AR, & PMA-R; as usual, I used the RAPM.

Savage 2013: administered RAPM & CCFT; as usual, only used RAPM.

Stepankova et al 2013: administered the Block Design (BD) & Matrix Reasoning (MR) nonverbal subtests of the WAIS-III.

Nussbaumer et al 2013: administered RAPM & I-S-T 2000 R tests; participants were trained in 3 conditions: non-adaptive single 1-back (low); non-adaptive single 3-back (medium); adaptive dual n-back (high). Given the low training time, I decided to drop the medium group, as it is unclear whether that intervention is doing anything, and treat the high group as the experimental group vs a low active control group.

Burki et al 2014: split experimental groups across the passive & active controls; young and old groups were left unpooled because they used the RAPM and RSPM respectively.

Pugin et al 2014: used the TONI-IV IQ test from the post-test, but not the followup scores; the paper reports the age-adjusted scaled values, but Fiona Pugin provided me the raw TONI-IV scores.

Schmiedek et al 2014 (Younger Adults Show Long-Term Effects of Cognitive Training on Broad Cognitive Abilities Over 2 Years / Hundred Days of Cognitive Training Enhance Broad Cognitive Abilities in Adulthood: Findings from the COGITO Study) had subjects practice 12 different tasks, one of which was single (spatial) n-back, but it was not adaptive ("difficulty levels for the EM and WM tasks were individualized using different presentation times (PT) based on pre-test performance"); due to the lack of adaptiveness and the 11 other tasks participants trained, I am omitting their data.

Horvat 2014: I thank Sergei & Google Translate for helping with extracting relevant details from the body of the thesis, which is written in Slovenian. The training time was 20-25 minutes in 10 sessions, or 225 minutes total.
The SPM test scores can be found on pg57, Table 4; the non-speeding of the SPM is discussed on pg44; the estimate of $0 in compensation is based on the absence of references to the local currency (euros), the citation on pg32-33 of Jaeggi's theories on payment blocking transfer due to intrinsic vs extrinsic motivation, and the general rarity of paying young subjects like the 13-15-year-olds used by Horvat.

Baniqued et al 2015: note that total compensation is twice as high as one would estimate from the training time times hourly pay; see the supplementary material. They administered several measures of Gf; as usual, I have extracted only the one closest to being a matrix test and probably the most g-loaded. That particular test is based on the RAPM, so it is coded as RAPM. The full training involved 6 tasks, one of which was DNB; the training time is coded as just the time spent on DNB (ie. the total training time divided by 6). Means & SDs of post-test matrix scores were extracted from the raw data provided by the authors.

Kuper & Karbach 2015: control group split.

Schwarb et al 2015: reports 2 experiments, both of whose RAPM data are reported as change scores (the average test-retest gain & the SDs of the paired differences); the Cochrane Handbook argues that change scores can be included as-is in a meta-analysis using post-test variables, as the difference between the post-tests of controls/experimentals will become equivalent to change scores. The second experiment has 3 groups: a passive control group, a visual n-back group, and an auditory n-back group. The control group is split.

Heinzel et al 2016: does not specify how much participants were paid.

Lawlor-Savage & Goghari 2016: recorded post-tests for both RAPM & CCFT; I use RAPM as usual.

Minear et al 2016: two active control groups (Starcraft and non-adaptive n-back), split control group.
They also administered two subtests of the ETS Kit of Factor-Referenced Tests, the RPM, and Cattell Culture Fair Tests, so I use the RPM.

The following authors had their studies omitted and have been contacted for clarification:

Seidler, Jaeggi et al 2010 (experimental: n=47; control: n=45) did not report means or standard deviations
Preece's supervising researcher
Minear
Katz

SOURCE

Run as `R --slave --file=dnb.r | less`:

```r
set.seed(7777) # for reproducible numbers
# TODO: factor out common parts of `png` (& make less square), and `rma` calls
library(XML)
dnb <- readHTMLTable(colClasses = c("integer", "character", "factor",
        "numeric", "numeric", "numeric", "numeric", "numeric", "numeric",
        "logical", "integer", "factor", "integer", "factor", "integer", "factor"),
    "/tmp/burl8109K_P.html")[[1]]
# install.packages("metafor") # if not installed
library(metafor)

cat("Basic random-effects meta-analysis of all studies:\n")
res1 <- rma(measure="SMD", m1i = mean.e, m2i = mean.c, sd1i = sd.e, sd2i = sd.c, n1i = n.e, n2i = n.c,
            data = dnb); res1

png(file="~/wiki/images/dnb/forest.png", width = 680, height = 800)
forest(res1, slab = paste(dnb$study, dnb$year, sep = ", "))
invisible(dev.off())

cat("Random-effects with passive/active control groups moderator:\n")
res0 <- rma(measure="SMD", m1i = mean.e, m2i = mean.c, sd1i = sd.e, sd2i = sd.c, n1i = n.e, n2i = n.c,
            data = dnb,
            mods = ~ factor(active) - 1); res0
cat("Power analysis of the passive control group sample, then the active:\n")
with(dnb[dnb$active==0,],
     power.t.test(n = mean(sum(n.c), sum(n.e)), delta=res0$b[1], sd = mean(c(sd.c, sd.e))))
with(dnb[dnb$active==1,],
     power.t.test(n = mean(sum(n.c), sum(n.e)), delta=res0$b[2], sd = mean(c(sd.c, sd.e))))
cat("Calculate necessary sample size for active-control experiment of 80% power:")
power.t.test(delta = res0$b[2], power=0.8)

png(file="~/wiki/images/dnb/forest-activevspassive.png", width = 750, height = 1100)
par(mfrow=c(2,1), mar=c(1,4.5,1,0))
active <- dnb[dnb$active==TRUE,]
passive <- dnb[dnb$active==FALSE,]
forest(rma(measure="SMD", m1i = mean.e, m2i = mean.c, sd1i = sd.e, sd2i = sd.c, n1i = n.e, n2i = n.c,
           data = passive),
       order=order(passive$year), slab=paste(passive$study, passive$year, sep = ", "),
       mlab="Studies with passive control groups")
forest(rma(measure="SMD", m1i = mean.e, m2i = mean.c, sd1i = sd.e, sd2i = sd.c, n1i = n.e, n2i = n.c,
           data = active),
       order=order(active$year), slab=paste(active$study, active$year, sep = ", "),
       mlab="Studies with active control groups")
invisible(dev.off())

cat("Random-effects, regressing against training time:\n")
rma(measure="SMD", m1i = mean.e, m2i = mean.c, sd1i = sd.e, sd2i = sd.c, n1i = n.e, n2i = n.c, data = dnb,
    mods = training)

png(file="~/wiki/images/dnb/effectsizevstrainingtime.png", width = 580, height = 600)
plot(dnb$training, res1$yi, xlab="Minutes spent n-backing", ylab="SMD")
invisible(dev.off())

cat("Random-effects, regressing against administered speed of IQ tests:\n")
rma(measure="SMD", m1i = mean.e, m2i = mean.c, sd1i = sd.e, sd2i = sd.c, n1i = n.e, n2i = n.c,
    data = dnb, mods=speed)

png(file="~/wiki/images/dnb/iqspeedversuseffect.png", width = 580, height = 600)
plot(dnb$speed, res1$yi)
invisible(dev.off())

cat("Random-effects, regressing against kind of n-back training:\n")
rma(measure="SMD", m1i = mean.e, m2i = mean.c, sd1i = sd.e, sd2i = sd.c, n1i = n.e, n2i = n.c,
    data = dnb, mods=~factor(N.back)-1)

cat("*, payment as a binary moderator:\n")
rma(measure="SMD", m1i = mean.e, m2i = mean.c, sd1i = sd.e, sd2i = sd.c, n1i = n.e, n2i = n.c, data = dnb,
    mods = ~ as.logical(paid))
cat("*, regressing against payment amount:\n")
rma(measure="SMD", m1i = mean.e, m2i = mean.c, sd1i = sd.e, sd2i = sd.c, n1i = n.e, n2i = n.c, data = dnb,
    mods = ~ paid)
cat("*, checking for interaction with higher experiment quality:\n")
rma(measure="SMD", m1i = mean.e, m2i = mean.c, sd1i = sd.e, sd2i = sd.c, n1i = n.e, n2i = n.c, data = dnb,
    mods = ~ active * as.logical(paid))

cat("Test Au's claim about active control groups being a proxy for international differences:")
rma(measure="SMD", m1i = mean.e, m2i = mean.c, sd1i = sd.e, sd2i = sd.c, n1i = n.e, n2i = n.c, data = dnb,
    mods = ~ active + I(country=="USA"))

cat("Look at all covariates together:")
rma(measure="SMD", m1i = mean.e, m2i = mean.c, sd1i = sd.e, sd2i = sd.c, n1i = n.e, n2i = n.c, data = dnb,
    mods = ~ active + I(country=="USA") + training + IQ + speed + N.back + paid + I(paid==0))

cat("Publication bias checks using funnel plots:\n")
regtest(res1, model = "rma", predictor = "sei", ni = NULL)

png(file="~/wiki/images/dnb/funnel.png", width = 580, height = 600)
funnel(res1)
invisible(dev.off())

# If we plot the residual left after correcting for active vs passive, the funnel plot improves
png(file="~/wiki/images/dnb/funnel-moderators.png", width = 580, height = 600)
res2 <- rma(measure="SMD", m1i = mean.e, m2i = mean.c, sd1i = sd.e, sd2i = sd.c, n1i = n.e, n2i = n.c,
            data = dnb, mods = ~ factor(active)-1 )
funnel(res2)
invisible(dev.off())

cat("Little publication bias, but let's see trim-and-fill's suggestions anyway:\n")
tf <- trimfill(res1); tf

png(file="~/wiki/images/dnb/funnel-trimfill.png", width = 580, height = 600)
funnel(tf)
invisible(dev.off())

# optimize the generated graphs by cropping whitespace & losslessly compressing them
system(paste('cd ~/wiki/images/dnb/ &&',
    'for f in *.png; do convert "$f" -crop',
    '`nice convert "$f" -virtual-pixel edge -blur 0x5 -fuzz 10% -trim -format',
    '\'%wx%h%O\' info:` +repage "$f"; done'))
system("optipng -quiet -o9 -fix ~/wiki/images/dnb/*.png", ignore.stdout = TRUE)
```

EXTERNAL LINKS

Hacker News discussion

To give an idea of how intensive: it cost ~$14,000 (2002) or $18,200 (2013) per child per year.

From pg 54-55:

An issue of great concern is that observed test score improvements may be achieved through various influences on the expectations or level of investment of participants, rather than on the
intentionally targeted cognitive processes. One form of expectancy bias relates to the placebo effects observed in clinical drug studies. Simply the belief that training should have a positive influence on cognition may produce a measurable improvement on post-training performance. Participants may also be affected by the demand characteristics of the training study. Namely, in anticipation of the goals of the experiment, participants may put forth a greater effort in their performance during the post-training assessment. Finally, apparent training-related improvements may reflect differences in participants' level of cognitive investment during the period of training. Since participants in the experimental group often engage in more mentally taxing activities, they may work harder during post-training assessments to assure the value of their earlier efforts.

Even seemingly small differences between control and training groups may yield measurable differences in effort, expectancy, and investment, but these confounds are most problematic in studies that use no control group (Holmes et al., 2010; Mezzacappa & Buckner, 2010), or only a no-contact control group: a cohort of participants that completes the pre- and post-training assessments but has no contact with the lab in the interval between assessments. Comparison to a no-contact control group is a prevalent practice among studies reporting positive far transfer (Chein & Morrison, 2010; Jaeggi et al., 2008; Olesen et al., 2004; Schmiedek et al., 2010; Vogt et al., 2009). This approach allows experimenters to rule out simple test-retest improvements, but is potentially vulnerable to confounding due to expectancy effects. An alternative approach is to use a control training group, which matches the treatment group on time and effort invested, but is not expected to benefit from training (groups receiving control training are sometimes referred to as active control groups).
For instance, in Persson and Reuter-Lorenz (2008), both trained and control subjects practiced a common set of memory tasks, but difficulty and level of interference were higher in the experimental group's training. Similarly, control training groups completing a non-adaptive form of training (Holmes et al., 2009; Klingberg et al., 2005) or receiving a smaller dose of training (one-third of the training trials as the experimental group, e.g., Klingberg et al., 2002) have been used as comparison groups in assessments of Cogmed variants. One recent study conducted in young children found no differences in performance gains demonstrated by a no-contact control group and a control group that completed a non-adaptive version of training, suggesting that the former approach may be adequate (Thorell et al., 2009). We note, however, that regardless of the control procedures used, not a single study conducted to date has simultaneously controlled motivation, commitment, and difficulty, nor has any study attempted to demonstrate explicitly (for instance, through subject self-report) that the control subjects experienced a comparable degree of motivation or commitment, or had similar expectancies about the benefits of training.

Details about the treated (active) vs untreated (passive) differences in Melby-Lervåg & Hulme 2013:

…This controls for apparently irrelevant aspects of the training that might nevertheless affect performance. In a review of educational research, Clark and Sugrue (1991) estimated that such Hawthorne or expectancy effects account for up to 0.3 standard deviations of improvement in many studies.

The meta-analytic results:

Verbal WM: d=0.99 vs 0.69
Visuospatial WM: 0.63 vs 0.36
Nonverbal abilities: 0 vs 0.38
Stroop: 0.30 vs 0.35

There was a significant difference in outcome between studies with treated controls and studies with only untreated controls.
In fact, the studies with treated control groups had a mean effect size close to zero (notably, the 95% confidence intervals were d=-0.24 to 0.22 for treated controls, and d=0.23 to 0.56 for untreated controls). More specifically, several of the research groups demonstrated significant transfer effects to nonverbal ability when they used untreated control groups but did not replicate such effects when a treated control group was used (e.g., Jaeggi, Buschkuehl, Jonides, & Shah, 2011; Nutley, Söderqvist, Bryde, Thorell, Humphreys, & Klingberg, 2011). Similarly, the difference in outcome between randomized and nonrandomized studies was close to significance (p=.06), with the randomized studies giving a mean effect size that was close to zero. Notably, all the studies with untreated control groups are also nonrandomized; it is apparent from these analyses that the use of randomized designs with an alternative treatment control group is essential to give unambiguous evidence for training effects in this field.

A more complicated analysis, including baseline performance and other covariates, would do better.
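Returning to the shared-group splitting used throughout the Notes: per the Cochrane Handbook passage quoted earlier, for continuous outcomes only the participant counts of a shared control group are divided, while means and standard deviations are left unchanged. A hypothetical Python sketch (all study numbers invented; the real analysis computes SMDs via metafor, which also applies a small-sample Hedges' g correction that this plain Cohen's d omits):

```python
import math

def split_shared_group(n, k):
    """Divide a shared control group of size n approximately evenly
    into k comparisons; means and SDs stay unchanged."""
    base = n // k
    return [base + (1 if i < n % k else 0) for i in range(k)]

def smd(mean_e, sd_e, n_e, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d with pooled SD)."""
    sp = math.sqrt(((n_e - 1) * sd_e**2 + (n_c - 1) * sd_c**2)
                   / (n_e + n_c - 2))
    return (mean_e - mean_c) / sp

# Hypothetical study: one passive control group of 45 shared by 2 comparisons.
print(split_shared_group(45, 2))                  # sizes of the two half-controls
print(round(smd(32, 4.0, 20, 29, 5.0, 22), 3))    # SMD for one comparison
```

Splitting the n (rather than duplicating the whole control group for each comparison) avoids double-counting participants, at the cost of leaving the resulting comparisons somewhat correlated, as the Handbook notes.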

Are web developer bootcamps worth it?

So you've completed a few free programming lessons online, and you've even written several working applications. Now all you have to do is complete one of those three-to-six-month coding bootcamps, and you'll be a professional developer, right? Not quite.

In a recent review of bootcamps, TechBeacon found that 17 of 24 programs claimed that 90% or more of their students got full-time programming jobs or freelancing positions within six to 12 months of graduation. But those numbers can be misleading.

According to the Coding Bootcamp Market Sizing Report, the developer job market is flooded with bootcamp graduates, and that makes it hard for individual graduates to stand out. Negative bootcamp reviews show patterns of dissatisfaction with teachers and volatility in the programs. And the "don't learn to code" backlash against the learn-to-code movement has also put a damper on the bootcamp party.

Is a coding bootcamp the best option if you want to become a developer? That's been a hot topic of discussion recently.

There are vast amounts of free and low-cost resources available to teach yourself programming online. Educating yourself and building a portfolio without a degree is absolutely doable, professional developers say. Before you jump into a bootcamp that will separate you from your hard-earned money, however, consider the caveats below. And then consider teaching yourself to code.

Bootcamp placement numbers can be misleading

Many bootcamps claim or imply that you can become a professional developer in three weeks, 12 weeks, or perhaps six months when you take their courses. But most of these 90%+ job placement claims are largely unaudited.
HackReactor, Turing School, and Lighthouse Labs are among the few that report student outcomes. Course Report, a site that hosts reviews and resources for coding bootcamps, has conducted student surveys (with 1000+ respondents from many reputable in-person bootcamps) for the past three years through its annual Alumni Outcomes & Demographics Study.

The 2014 report claims that no more than 75% of coding bootcamp graduates gained employment as developers after graduation. In 2015 that number dropped to 66%. For 2016 it jumped back up to 73%.

Not all bootcamp attendees are starting from scratch. Some aren't there to get a developer job, and some students are already professional developers who are just trying to acquire new skills. While the study doesn't show us who went from "zero to developer," the surveys cast doubt on many programs' 90% job placement claims.

Quality complaints are common

It's not hard to find a litany of bad bootcamp experiences online. You can find plenty of positive reviews as well, on sites such as Course Report, but people considering bootcamps may not hear as much about negative experiences. Graduates cite several reasons for this. For example, they may not want to devalue something on which they spent so much time and money, or they don't want to get into a confrontation with the bootcamp provider after posting a negative review.

Many of the negative reviews that do get posted are criticisms of teachers. Basel Farag, a TechBeacon contributor and iOS developer with experience as a bootcamp mentor, admits that finding good teachers is hard. "You don't get paid much, so you have to really love doing it," he says. Although several schools have highly qualified, well-paid teachers, many bootcamps fill teaching assistant and mentor positions with less-experienced developers, says Farag.

The practice of bootcamps hiring their own graduates as mentors immediately after graduation is widespread, Farag claims.
Not only does that help to fill a shortage of teaching assistants, but it's also an easy way for bootcamps to improve job placement stats. "It's a very common practice," he says; it's nothing new, and it's not restricted to bootcamps. "We see law schools doing this all the time."

Another concern is that, when working with inexperienced teachers who don't have a lot of time to spare, there is always a danger that your bootcamp experience could resemble this anonymous reviewer's story: "A few of our teachers hadn't even been in tech longer than two years. Their teaching skills lacked, and they got increasingly frustrated when students didn't understand the material."

Because of their lower pay, mentors need to take on additional students (if they're paid by the number of students they mentor, as they were in the bootcamp I attended) or work a second job. This can cause some mentors to make themselves less available to students, or to provide low-quality feedback, as some online reviews claim.

Get realistic about length of training time

Bootcamp students who come into programs as beginners are not prepared for a development job when they graduate. "It's possible that you might qualify for a junior developer or internship position after graduating from one of the more rigorous bootcamps," says Farag, "but it's going to be very hard to stand out from the increasing number of bootcamp graduates, and thousands of computer science graduates. You can't truly become a developer in three to six months."

The problem comes when companies interview graduates and find that their programming skills aren't fundamentally sound. Even though developer interviews have problems of their own, Farag says that a technical interviewer will eventually find out if you can't implement some of the most basic algorithms. Many coding bootcamps don't spend much time on algorithms, and many courses focus on learning tools rather than programming.
Ken Mazaika, co-founder and CTO of the Firehose Project, an online coding bootcamp, also sees this trend: "The good coding bootcamps out there will cover CS topics around algorithms and data structures, but 9 out of 10 coding bootcamps won't cover these topics at all, because these topics can be difficult to teach."

Mazaika's view of the industry is particularly jaded, as the title of his 2015 post makes clear: "The Dirty Little Secrets About The Worst Coding Bootcamps Out There: 9 out of 10 programs are outright scams."

Many of the top coding bootcamps teach frameworks, such as Ruby on Rails, that favor convention over configuration. That is, students learn the usage conventions for a specific tool, but not the fundamentals of how web development actually works across tools and technologies. These frameworks give students just enough knowledge to start building simple web apps. After getting a handful of projects under their belts, many graduates believe they are ready to enter the job market. Unfortunately, they still lack a solid foundation.

Bootcamp grads have flooded the market

The probability of landing a junior development job after graduating from a bootcamp was higher in 2013 than it is today, because of an explosion of bootcamp graduates flooding the market, says Marcel Degas, QA engineer at BlueRocket and a General Assembly bootcamp graduate. "With so many new coding bootcamps, and bootcamp grads hitting the job market over the past couple of years, finding a job as a junior software engineer in the Bay Area is not as easy as it once was," he says.

What's more, hiring managers aren't as impressed by bootcamps as they once were, says Ted Whang, a developer at Mazlo and a 2014 coding bootcamp graduate. "You dropped everything in your life and dedicated three months straight to learning how to code? 'That's amazing!'
You won’t hear those kind words of praise anymore, except maybe from your mother," he says. "The thing is… the more people can do something, that something becomes less impressive."

A few years ago, bootcamp entrepreneurs saw an opportunity when they noticed a shortage of developers. They thought they could close the gap by creating coding bootcamp businesses that train people in basic development skills. But professional developers, even junior ones, need experience in many different aspects of programming to be effective software engineering professionals.

If everyone could do it, there’d be no scarcity in the first place. Now hiring managers can choose from a large pool of programming newbies straight out of coding bootcamps, but that doesn't solve the challenge of increasing the number of highly qualified and experienced developers throughout the industry.

The 'don’t learn to code' backlash

When the learn-to-code movement arrived in 2012, the don’t-learn-to-code movement followed. This blogging backlash by Farag, "Uncle Bob" Martin, and others might have seemed mean-spirited and egotistical, but some of its complaints about the programming profession raised legitimate concerns.

John Kurkowski, a user experience (UX) engineer at CrowdStrike, says programming isn’t an inviting field because even the most mature technologies have been roughly cobbled together over the years, and developers often spend much of their time hacking together libraries that were never meant to be used together. Maybe in ten years, he says, developers will have tools and platforms that work more elegantly and are easier to work with.

But Mike Hadlow, a freelance C# developer with more than 20 years of software development experience, points out that software development is harder than people think.
It's one of the few highly skilled occupations that requires no professional certification (although some believe it should), and it might just be the only highly skilled job where other workers in the industry give copious amounts of their free time and energy to help train people off the street.

That free entry is both good and bad because, as Martin, author of Clean Code, points out, the industry usually doesn’t benefit from hordes of novices; it needs carefully trained individuals. He compares good developer training to a flight school, adding that not many bootcamps are that intense, nor do they require as many hours of training.

Jeff Atwood, the co-founder of Stack Overflow, perhaps sums it up best: “While I love that programming is an egalitarian field where degrees and certifications are irrelevant in the face of experience, you still gotta put in your ten thousand hours like the rest of us.”

Ask yourself: Are you cut out for coding?

You've felt that first sip of power that programming gives you. You finish your first program, then all of the syntax starts to make sense after you build a few more, and perhaps you complete a course on Codecademy or Coursera. At that moment, you think: “I could do this for a living.”

But at this stage of the game you still have no idea what you're doing. You haven’t stayed up until 2 AM three nights in a row trying to fix a bug or solve a problem. You haven’t had to spend the rest of your day sorting out version control issues and getting stuck going down multiple rabbit holes. You haven’t had your app stop working even though you're sure that you didn't change anything.

You need an extreme level of commitment and patience to work your way up to an entry-level developer position, and far more for the rest of your career. "It was—and is—that persistence that allows me to stay in this field," says Farag.

Going in, bootcamp students may not realize that computer science is actually a low-success educational field.
And there’s plenty of evidence that computer science programs don’t have stellar graduation rates: between 30 and 60 percent of first-year students in university computer science departments fail their first programming course. So why would anyone expect bootcamps to be significantly more successful?

What's more, even developers who get computer science degrees say that they are largely self-taught, according to the 2016 Stack Overflow Developer Survey. Computer science departments can’t keep up with the rate of change in the industry; developers can never stop learning.

Need any more discouragement? A 2008 survey of nearly 900 developers on Stack Overflow revealed that, if your interest in programming didn't start between the ages of 8 and 18, your chances of being motivated enough to become a developer are low. It’s still possible to become a programmer in your early twenties, of course; it’s just a lot harder when most of your time is spent working to support yourself.

All of this is why bad practices aren’t the only reason coding bootcamps fail to take many people from zero to developer in just a few months. Programming is fundamentally hard, and people who are considering these bootcamps should be honest with themselves about their level of commitment to programming. Software engineering is not an easy way to get rich quick.

If you really want to find out whether software development is the right career path for you, ask yourself these questions:

Am I willing to work hard not just for the three months it takes to complete a bootcamp, but for the rest of my life? Even when it’s not my job? Even though I have to give up a lot of my leisure time in the early years of self-teaching?

Am I able to get unstuck on problems without the help of a mentor?
Am I motivated enough to never give up on those problems?

Do I want to adopt programming as one of my main hobbies?

If you can say yes to all of the above, then you should be able to surmount the obstacles to becoming a developer without the help of a bootcamp. You also won’t fall flat, as many students do after attending a bootcamp, because the class was the only thing pushing you to keep working.

Farag recommends coding bootcamps only to experienced developers looking to update their skills. For people who want to learn programming, he recommends community colleges (which can be much cheaper than bootcamps) or a four-year university program.

DIY: Getting there without the bootcamp

There are plenty of people out there who have nothing but good things to say about their bootcamp experiences, and some landed jobs a few months after completion. But with a little extra time and more awareness of the resources at their disposal, those people probably could have succeeded without investing thousands of dollars in a bootcamp, says Farag.

Documentation for all of the open source tools, languages, and frameworks that bootcamps teach is available online, and there are countless free tutorials on just about any development technology a bootcamp will teach you. All you need to do is pick a technology and run a Google search. There's also this convenient list of links to several massive open online courses (MOOCs) on programming.

If you don't know exactly what to do, try building a new application every day. Jennifer Dewalt, the founder of Zube, did this. With each new project, she added to her portfolio and gained new skills. Quantity trumps quality when you’re learning; just build things.

You can also follow the steps in this great post about becoming a web developer from scratch, with no computer science degree.
Low-cost coding lessons on Code School, Treehouse, NetTuts+, Udacity, Pluralsight, or Launch Academy are also a good option, and cost far less than a bootcamp. And check out Codementor if you really need help getting unstuck, or need some learning advice.

If you're meant to be a programmer, you won’t give up. You will get frustrated, but if you're determined, you'll keep trying. A bootcamp can't give you that motivation.

Have any American citizens ever been personally denied healthcare in the USA?

Yes. As an active duty military member during the period covered by this answer, I was covered by single-payer healthcare almost identical to the UK’s NHS system. The only real differences are that in the UK everyone is enrolled but can opt out by paying private doctors, while in the active duty military system only active duty members, retirees, and military dependents are enrolled. Also, active duty can't opt out: we're prohibited from procuring outside care due to military readiness concerns.

In 2013, I had a tumor in my foot removed. When the fat pad didn’t grow back, I requested a fat graft to replace it, something done very frequently in plastic surgery centers (though usually so rich women can wear high heels more easily). Tricare denied me, so I appealed. The appeal took a year and a half to maneuver through the bureaucracy before I transferred across the country with it still unapproved.

Once I arrived on the other side of the country, I had to start all over. It took me two months to get an appointment at Langley with a podiatrist; he concurred with the request for a fat graft. The military medical system recaptured the request and made me see another podiatrist in Portsmouth, which took another month. He didn’t understand why I was sent there, because Portsmouth isn’t experienced with fat grafts, but he concurred that a fat graft is the most conservative option. He requested a fat graft out in town, but Portsmouth Naval Hospital exercised its right of first refusal and made me schedule an appointment with its Plastic Surgery clinic, which took another month.

When I saw Portsmouth Naval Hospital Plastic Surgery, the surgeon also couldn't understand why I was sent there, since Portsmouth Naval Hospital has zero experience with weight-bearing fat grafts, but he concurred that fat grafting is the most conservative option. He put in a referral for a specific doctor who is experienced in weight-bearing fat grafts.
Tricare tries to refer me to Portsmouth Naval Hospital Podiatry again, but I fight back for a month and am able to make an evaluation appointment with the doctor (ironically, his only availability is on Veteran's Day, two months out).

Two months later I see the surgeon, who declares I’m a prime candidate for fat grafting, although the two years I’ve now had to wait have increased the risk of failure significantly.

One month later, Tricare marks the surgery request as received. Tricare refers me to Portsmouth Naval Hospital Podiatry for the surgery, and even to the specific doctor who told me he can’t do the surgery. Three days later, the surgery is denied as “not a covered procedure.”

An O-5 in Portsmouth Plastic Surgery states via email that she "was told to instruct [me] to contact [my] congressman to help get this resolved. Please let us know if there is anything else you might need assistance with. Have a Happy Holiday Season." I call the supervisor of Patient Advocacy; he tells me that Tricare only approves procedures that have a large number of finished studies of that specific procedure addressing my specific condition, and that the DoD has given Health Net sole authority to determine what is and is not covered. He won't address my questions about what responsibility, if any, Tricare bears in getting me healthy. He tells me that filing for the Defense Health Agency waiver referred to in the letter is "worthless," as "in three years of being here, I've only seen it succeed once, and it was almost too late for the person who needed the lifesaving cancer treatment." He also tells me that my only real recourse is to call my Congressional Representative(s).

Two weeks later I’m able to get my PCM to write a referral to Walter Reed. The referral sits in limbo for two weeks.
I also officially request a waiver for the fat graft procedure.

At this point, it’s probably easier just to copy my journal notes into the answer so you can see what life is like for someone in the military medical system:

25Jan13 - Removed neuroma.

22Mar13 - "mild erythema with continued fibrosis" - hydrocortisone injection.

03May13 - "mild edema with acute tenderness to palpation of the fibular sesamoid. We discussed possible capsulitis. Treatment today included a TPI with 5mg of Kenalog instilled into the symptomatic joint space." Dr. <redacted> discussed removal of the sesamoid bone; I requested a second opinion. Did not receive any response from Tricare on approving the request (even with significant follow-up from me) until 05Sep13.

Sep13 - Went to see Dr. <redacted>, DPM, Oxnard, CA, for a second opinion. He recommended fat grafting into the area. I asked him to put in the referral request. Due to poor communication from him (limited English) and his staff (other reasons), I did not understand until 15Dec13 that he already knew Tricare would not cover this treatment, and that even if they did, there isn't a single plastic surgeon in Los Angeles or Ventura Counties who accepts Tricare.

25Sep13 - MRI, right foot, Oxnard, CA: "ball of foot subcutaneous edema, consider changes related to altered weightbearing. A previously noted fluid signal structure about the first metatarsal is no longer evident."

06Nov13 - I saw Mr. <redacted>, patient advocate at Port Hueneme Clinic. He was markedly unhelpful, essentially telling me to call Dr. <redacted> in Oxnard back.

03Jan14 - Dr. <redacted>, PCM at Port Hueneme, CA, specifically requests Tricare to "please authorize for surgical procedure to correct the loss of natural cushioning essential to prevent foot pain with walking or running."

No action from Tricare, in spite of regular follow-up, January through June of 2014.

15Jun14 through 11Jul14 - Permanent Change of Station from California to Virginia.

Aug14 - I see Dr.
<redacted> in Hampton Roads, who sends me to Langley Podiatry for a consult.

11Aug14 - I see Dr. <redacted> at Langley Podiatry. He takes an X-ray and an MRI. X-ray information: "Impression: 1. Bilateral pes planus. 2. Degenerative changes at the 1st metatarsophalangeal joint bilaterally. 3. Mild right hallux valgus." MRI information: "Findings: There is soft tissue distortion and blooming artifact at the base of the 1st MTP joint adjacent to the medial plantar sesamoid. This is most likely post surgical. The sesamoids themselves appear grossly unremarkable. Impressions: Postsurgical change at the plantar surface of the 1st MTP joint. Artifact is present here which limits visibility. No definite acute fracture or dislocation was seen. Edema in the 3rd interdigital space may be postsurgical. No soft tissue mass was identified." He tells me that there are two options: amputate the sesamoid bone(s?) and hope for the best, or take the more conservative option and do a fat graft. He puts in a request for a fat graft out in town, but Portsmouth Naval Hospital exercises its right of first refusal and makes me schedule an appointment with its Podiatry clinic.

03Sep14 - I see Portsmouth Naval Hospital Podiatry Dr. <redacted>, who can't understand why I was sent there at all, and concurs with Dr. <redacted from Langley> that a fat graft is the most conservative option. He requests a fat graft out in town, but Portsmouth Naval Hospital exercises its right of first refusal and makes me schedule an appointment with its Plastic Surgery clinic. He does an X-ray, which results in the following statements: "1. Mild hallux valgus deformity, 2. Small enthesophyte at the Achilles tendon insertion, 3. Flatfoot."

25Sep14 - I see Portsmouth Naval Hospital Plastic Surgery Dr. <redacted>, who concurs with Dr. <redacted> and Dr.
<redacted> from Langley and Portsmouth that a fat graft is the most conservative option, but can't understand why I was sent there at all, since Portsmouth Naval Hospital has zero experience with weight-bearing fat grafts. He asks me what research I have done on my own. I tell him about Dr. <redacted> at the University of Pittsburgh Medical Center, who specializes in this treatment for foot injuries. He recognizes the stature of both the medical center and Dr. <redacted> in this field once I mention the names, and immediately requests a fat graft through UPMC. After fighting with Tricare over Portsmouth Naval Hospital exercising its right of first refusal again, I am able to make an appointment with Dr. <redacted> at his first availability: Veteran's Day 2014.

11Nov14 - I fly to Pittsburgh and see Dr. <redacted> (a plastic surgeon) and his wife (a podiatrist). They tell me I am a perfect candidate for this procedure and put in a request for the fat grafting surgery.

16Dec14 - After not hearing from Tricare, I spend hours on the phone trying to get an update. They tell me they ignored the request (their words) because one number was missing in my identifier data from Pittsburgh. I provide the number and Tricare marks the surgery request as received. Portsmouth Naval Hospital exercises its right of first refusal again and a referral is automatically input for Portsmouth Podiatry. I call Tricare and, after an hour on the phone, get them to assess it internally.

19Dec14 - Surgery denied by Tricare / Health Net. Reason given is "not a covered procedure." CDR <redacted> of Portsmouth Plastic Surgery states that she "was told to instruct [me] to contact [my] congressman to help get this resolved. Please let us know if there is anything else you might need assistance with. Have a Happy Holiday Season." I call Mr.
<redacted>, the supervisor of Patient Advocacy; he tells me that Tricare only approves procedures that have a large number of finished studies of that specific procedure addressing my specific condition, and that the DoD has given Health Net sole authority to determine what is and is not covered. He won't address my questions regarding what responsibility, if any, Tricare bears in getting me healthy. He is very forthcoming in advising me on filing for the Defense Health Agency waiver referred to in the letter: he says it is "worthless," since "in three years of being here, I've only seen it succeed once, and it was almost too late for the person who needed the lifesaving cancer treatment." Mr. <redacted> also tells me that, in his opinion, my only recourse is to call my Congressional Representative(s).

22Dec14 - CDR <redacted>, Portsmouth Hospital Plastic Surgery: "I apologize for this inconvenience that you are going through. I called around and I was told that there should have been "appeal" instructions on the letter that you received. If not, I was told to instruct you to contact your congressman to help get this resolved. Please let us know if there is anything else you might need assistance with. Have a Happy Holiday Season."

29Dec14 - My primary care manager, LT <redacted>, writes a referral to Walter Reed. The referral sits in limbo for two weeks. I also officially request a waiver through LT <redacted> for the fat graft procedure.

15Jan15 - Portsmouth attempts to take the referral away from Walter Reed per right of first refusal. I spend an hour on the phone to get it reconsidered.

22Jan15 - The Podiatry clinic at Portsmouth approves transfer of the referral to Walter Reed.

26Jan15 - The Walter Reed appointment line tells me that all National Capitol Region clinics are full until April and to call back on 30Jan15.

30Jan15 - Walter Reed offers an appointment 37 days away.
I ask about the 28-day Tricare standard of care for specialty appointments; the appointment desk tells me that if I want to inquire about the procedure for when the clinic cannot meet standards of care, I should leave a message with referral management and someone will call me back. I leave a message asking for a nurse to call me back so we can discuss a way forward to get my foot treated.

04Feb15 - Nurse <redacted> at Walter Reed cancels my appointment without contacting me. The reason given in the notes was “Service member refuses available appointments.”

06Feb15 - I call Walter Reed to check on the referral and am told the referral is canceled.

09Feb15 - I speak to <redacted> in Patient Advocacy at Walter Reed, who doesn't help until I tell her that I want to file an official complaint against Nurse <redacted>. She tells me that active duty never get appointments that meet the 28-day requirement and that I need to stop insisting on being seen within 28 days or I'll never be seen.

11Feb15 - <redacted> calls me back and says my referral is reinstated, but I will have to wait until 13Feb15 to make an appointment.

13Feb15 - First available appointment is 20Apr15. I make the appointment, and specifically ask whether they have the ability to perform fat grafts and/or Restylane injections; the appointment line says someone will get back to me.

02Mar15 - Mr. <redacted> at Portsmouth takes the first official action on my waiver request of 29Dec14. He forwards it to the grievance coordinator, Ms. <redacted>, and promises a phone call from her on 03Mar15.

09Mar15 - No contact from Portsmouth. I call Mr. <redacted>, who promises Ms. <redacted> will call on 10Mar15.

11Mar15 - Ms. <redacted> via email: "I wanted to follow-up with you regarding your request for the fat pad graft procedure and/or treatment. I have emailed both Dr <redacted> and Dr <redacted> requesting that they both chime in with my leadership so we can try and formulate a decision.
I am waiting still and as soon as I have something to pass on, I will contact you."

16Mar15 - Ms. <redacted> via email: "Your request is being discussed among leadership. Im waiting for confirmation on who will draft the request for waiver for DHA. As soon as I have a definitive decision to forward, rest assured I will."

18Mar15 - Ms. <redacted> via email: "It is my understanding that the DHA waiver is being drafted by the Plastics clinic folks. Im standing by waiting further details."

20Apr15 - Dr. <redacted> at Walter Reed walks into my appointment and immediately states, "I'm not sure why you're here. We don't do the kind of thing you're requesting here at Walter Reed." He couldn't answer why Walter Reed accepted a referral for something they don't do, or why no one called to inform me that the appointment would be a waste of time. I mention to him that I requested information on their ability to do the procedure and no one got back to me. He prescribes insoles and recommends that I see a pain management specialist as well as a rheumatologist for my hip and knee pain <as of 2017 this still hasn’t been approved either>. I make an appointment with the PCM for Monday, 27Apr15, to get these referrals and discuss the way forward.

I forward my concern about being referred to a clinic that can't do the requested procedure to the Officer in Charge at <redacted> Clinic, LCDR <redacted>. His response: "My only suggestion is that you contact the Patient Relations Department for Walter Reed at (301) 295-0156 and voice your concerns."

22Apr15 - Ms. <redacted> via email: "I am touching basis this morning with my Chain of Command as well as Health Benefits regarding the current referral concerns you are experiencing.
Please allow me a little time this morning to reach out to a few of the folks here at Naval Medical Center Portsmouth regarding what is best needed at this juncture to better assist you."

24Apr15 - Commanding Officer, Portsmouth, returns the waiver for more information. <redacted> at Patient Advocacy tells me he will keep me informed.

29Apr15 - I discuss my situation with Maj. <redacted> at Walter Reed Podiatry, who states she will not authorize Walter Reed to assist me beyond providing orthotics.

May15 - Dr. <redacted> at Walter Reed Podiatry convinces his chain of command to allow Ossatron and Stem Cell Therapy. I make the appointment for surgery.

10Jun15 - Ossatron and Stem Cell Therapy surgery is conducted at Walter Reed. As of 15Jul16, this has not improved the situation.

17Jul15 - I request an update on my Waiver Request from Mr. <redacted> at Portsmouth Patient Advocacy via email. No response. I request Physical Therapy through my doctor to address the continuing degeneration of my hips and knees due to the lack of treatment for my foot.

31Jul15 - I request an update on my Waiver Request from Mr. <redacted> at Portsmouth Patient Advocacy via email. No response.

17Jul15 - I request an update on my Waiver Request from Mr. <redacted> at Portsmouth Patient Advocacy via email. No response.

03Aug15 - I request an update on my Waiver Request from Ms. <redacted> at Portsmouth Patient Advocacy via email. No response.

17Aug15 - I request an update on my Waiver Request from Mr. <redacted> at Portsmouth Patient Advocacy via email. No response.

19Aug15 - I request an update on my Waiver Request from Mr. <redacted> at Portsmouth Patient Advocacy via email. He emails me back and states, "this has gone up the chain to Navy Medicine East. Mr. <redacted> and Mrs. <redacted> are aware of you contacting me regarding this matter and Mr <redacted> is following up with NAVMEDEAST on the status. I will contact him again today and advise to contact you regarding this matter."
No one contacts me. I never hear from Ms. <redacted> or Portsmouth Hospital Patient Advocacy again, even after repeated phone calls and messages asking them to assist.

25Aug15 - 14Sep15: Pool Physical Therapy at Fort Eustis. They have me "run" and jump in the water 2-3 times a week. It takes me up to 30 minutes after each session to recover from the pain enough to drive. I call it off after six weeks because I can't take the pain any more.

11Sep15 - I request an update on my Waiver Request from Mr. <redacted> at Portsmouth Patient Advocacy via email. No response.

24Sep15 - I call the Patient Advocacy desk and don't take "no" for an answer. I am never able to talk to anyone, but the front desk refers me to CAPT <redacted> at Navy Medicine East. He tells me that the waiver has been sent back a few times for format errors and still has not left Portsmouth since I requested it in Dec14 and/or since it was drafted in Mar15.

30Sep15 - I call Dr. <redacted> at Walter Reed and ask if there is anything to be done since the stem cell treatment failed. He recommends another round of treatment.

27Oct15 - CAPT <redacted> forwards the waiver to BUMED. No response through the rest of 2015.

15Jan16 - I contact Dr. <redacted> for another round of shockwave/stem cell therapy while I wait for fat grafting. He forwards the request to a Ms. <redacted> to set up the surgery.

29Jan16 - No response from Ms. <redacted>. I call her and leave a message asking her to call me back to set up surgery.

10Feb16 - I email CAPT <redacted> to request an update and find out he has retired. I spend most of the day trying to find out who has the action. A LT <redacted> is able to find hard-copy information and requests an update the same day. No response.

15Feb16 - No response from Ms. <redacted> on my stem cell surgery. I call her and leave another message asking her to call me back to set up surgery.

15Mar16 - No response from Ms. <redacted> on my stem cell surgery.
I call her and leave another message asking her to call me back to set up surgery.

16Mar16 - I receive a response from BUMED contractor <redacted>, who states that the waiver (initiated in 2014) was submitted to Defense Health in early March 2016. I inform her that I will be changing assignments in July and that I need surgery before then. I also identify a target date of the last week in June for surgery due to my PCS. She promises to update me by close of business on 17Mar16. The update never occurs.

12Apr16 - I have not heard from <redacted> since 16Mar16. I request a response and update, and remind her of the target date of the last week in June for surgery due to my PCS. She says she is "still working on my case" and will update me by COB on 15Apr16. The update never occurs.

14Apr16 - LT <redacted> at Portsmouth transfers, turning over my case to LCDR <redacted>.

13May16 - No updates from <redacted> or LCDR <redacted>. I email both. <redacted> leaves a message on my voicemail telling me she wants to talk to me, even though my voicemail greeting says I’m on leave.

26May16 - I hear the voicemail and respond to <redacted> via email, asking if I can provide any information and reminding her of the target date of the last week in June for surgery due to my PCS. She says she doesn't need anything and is still working on my waiver, but provides no actual information.

06Jun16 - I request an update from <redacted> via email, and remind her of the target date of the last week in June for surgery due to my PCS. No response.

20Jun16 - I request an update from <redacted> via email, copying my boss, and remind her of the target date of the last week in June for surgery due to my PCS. Her response is "As discussed I have submitted all of your paperwork to the DHA for consideration of your waiver request.
I will send you a status update this Friday (and every week on Friday as previously stated) via email."

It is important to note that at this point, not only have I not received "every Friday" updates, but I have received no response at all to many emails, and no information beyond "still waiting" since March 2016.

24Jun16 - At 4pm I ask <redacted> if I will get an update and ask when I should schedule travel and surgery. Her answer: "I inquired this week on the status of your case. As of today I have not received an approval/disapproval decision from the DHA. I have a meeting scheduled on Monday of next week to specifically discuss your waiver request. I hope to have an additional update for you on Monday following my meeting."

Tuesday, 28Jun16 - <redacted> asks me for my Primary Care Manager's name with no explanation. I provide this information along with all of the Podiatrists and other doctors who have referred me for fat grafting. I also ask when I should schedule surgery, and remind her that I start MBA classes 08Jul16. I also tell her that due to the compressed MBA schedule, I have a single open week starting 08Aug16 that I'm available for surgery.

**** At this point I have now transferred again, away from a friendly unit who knows my community and my job and into a bureaucratic student unit ****

11Jul16 - No updates since June. I request an update from <redacted> via email. No response.

14Jul16 - I request an update again from Ms. <redacted>.

15Jul16 - Email from Ms. <redacted>: "Your PCM will need to request a referral for an evaluation and treatment (to Dr. <redacted> who does the surgery) and submit that to Health Net for approval/disapproval. Once we receive an approval/disapproval from Health Net we can move forward to: (1.)
get the surgery scheduled and paid via Health Net or (2) resubmitting the SHCP waiver request to DHA (with the updated information from Health Net) to get the surgery scheduled and paid via the DHA. As discussed during our phone call, I will contact your PCM (Yorktown Clinic) and assist with the request for a referral. I will contact you on Monday if there are any additional updates. Please contact me if you have any questions."

It is important to note that I received disapproval from Health Net on 19 December *2014*; it is only due to the lack of action by Tricare that it has taken this long.

18Jul16 - I go to Clinic <redacted> and can't find anyone who knows anything about my issue. They insist I make an appointment, which is backed up until early August. I ask Ms. <redacted> who she spoke to, and she emails back that she can't remember but that she will get back to me by COB. LPN <redacted> at the clinic takes my information and promises to discuss it with LCDR <redacted> (my PCM) and get back to me by COB. Neither update happens.

19Jul16 - Ms. <redacted> emails that she remembers who she spoke to on 15Jul16: Ms. <redacted>, the health benefits coordinator, who evidently did not speak to my PCM team. Ms. <redacted> says that she will coordinate with my PCM team.

20Jul16 - A different nurse from the PCM team at Yorktown calls and says that LCDR <redacted> is unwilling to put in the referral (see 15Jul16 above) without an appointment. She sets up an appointment for 22Jul16.

22Jul16 - I arrive and LCDR <redacted> doesn't know much about my case. I ask him what he needs to write a referral, and he tells me I will need to go to Portsmouth Podiatry for an assessment. I relay this information to Ms. <redacted>, who responds, "Please allow me to do my job and work through the TRICARE Health Plan program requirements. I will follow up with you and provide you with an update by close of business today regarding referral."

She later emails me: "I spoke with Dr.
<redacted> this morning after your visit and he is generating a referral for Dr. <redacted> for an evaluation and treatment. You cannot schedule an appointment until the referral has been approved and an authorization number has been issued. Once the referral authorization number has been issued the appointment with Dr. <redacted> can be scheduled. I will contact you today when I have a status update on the referral request. Please do not make any Podiatry appointments at this time."26Jul16 - I ask whether my unit will need to fund the travel and when I will know what my surgery date is, and Ms. <redacted> response is:"I did not state that any appointments or medical services would be funded due to the fact that an authorization had not been issued. I will be contacting Health Net Federal Services, TRICARE Regional Contractor for the North Region) to confirm if an authorization has been issued. If a referral authorization is issued then funding can be coordinated.**Once again please do not schedule any appointments or initiate any requests for funding at this time. I will provide you with an update no later than 1700 today."Ms. <redacted> then spends a lot of time trying to coordinate a phone conversation with her supervisor without responding to my requests for an actual date of surgery. At the end of the day, she tells me that she will try to coordinate a surgery consult in Pittsburgh for 06Aug16, and will be contacting me with an update by COB Wednesday, 27Jul16. No response until I email her on Friday.29Jul16 - I ask Ms. <redacted> what the status is since I didn't get an update on Wednesday as she had promised, and I need to know what's going on so that I can schedule travel. 
She emails me back the Tuesday email, implies that I'm being impatient, and says that she will update me by COB Monday, 01Aug16.

—————————————————

Cue five or more additional pages of similarly ineffective medical treatment and you'll understand why I cringe inside any time I hear anyone say they want to "give the whole country access to the level of care the active duty have."

Edit in response to some questions:

1) AHCA doesn't apply to military Tricare; not only was Tricare exempted, it is considered full coverage.

2) One of the biggest misunderstandings about health insurance, not just in the US but worldwide, is that insurance = care. Charlie Gard's parents are finding out that there isn't an unlimited checkbook when it comes to medical care - even government care has limits.

3) For military healthcare, only those treatments specifically listed in the care handbook are covered. These treatments have billing codes and rates assigned. Tricare isn't really a medical treatment plan; it's a reimbursement plan for the items in the book. If you have a problem that requires a treatment not in the book, there is no burden on Tricare to find a way to treat you - they simply shrug and say "it's not in the book." It's on you to prove that the treatment you want has been studied, and the studies must have been published in multiple medical journals. If that's the case, and you can find them, you might be OK, but otherwise you're SOL.

4) Tricare only allows military doctors to address one problem at a time.
Thus, when I go to the doctor to address my back, hips, and knees that have degenerated due to the way I walk after the foot tumor, they tell me I need to make separate appointments for each knee, each hip, and my back - there is no concept of holistic medicine in the military medical system, or at least there hasn't been since I joined in the mid-90s.

5) I personally know at least 10 people who have been or are currently being medically discharged due to preventable permanent injuries sustained because of the months-long wait times in the military system. Many have ACL, MCL, hip, shoulder, and other injuries that could have been easily fixed but healed improperly while they waited. All of these people will be at least partially supported by the taxpayer for the rest of their lives, yet there is zero ability to hold anyone in the military accountable for improving the system.
