Periodic Trends Worksheet Answers: Fill & Download for Free


How to Edit Your Periodic Trends Worksheet Answers Online On the Fly

Follow the step-by-step guide to get your Periodic Trends Worksheet Answers edited with ease:

  • Hit the Get Form button on this page.
  • You will be taken to our PDF editor.
  • Edit your document with the tools in the top toolbar, such as highlighting and blackout.
  • Hit the Download button to save your all-set document onto your local computer.

We Are Proud to Let You Edit Periodic Trends Worksheet Answers Like Magic

Take a Look At Our Best PDF Editor for Periodic Trends Worksheet Answers


How to Edit Your Periodic Trends Worksheet Answers Online

If you need to sign a document, you may also need to add text, fill in the date, and do other editing. CocoDoc makes it easy to edit your form right in your browser. Let's see how you can do it.

  • Hit the Get Form button on this page.
  • You will be taken to our PDF text editor.
  • Once the editor appears, use the tool icons in the top toolbar to edit your form, for example to sign or erase.
  • To add a date, click the Date icon, then hold and drag the generated date to where you want it.
  • Change the default date by typing another date in the box.
  • Click OK to save your edits and click the Download button once the form is ready.

How to Edit Text for Your Periodic Trends Worksheet Answers with Adobe DC on Windows

Adobe DC on Windows is a useful tool for editing your file on a PC, especially if you prefer to work on files in your local environment. So, let's get started.

  • Click the Adobe DC app on Windows.
  • Find and click the Edit PDF tool.
  • Click the Select a File button and choose a file from your computer.
  • Click a text box to adjust the text's font, size, and other formatting.
  • Select File > Save or File > Save As to confirm the edit to your Periodic Trends Worksheet Answers.

How to Edit Your Periodic Trends Worksheet Answers with Adobe DC on Mac

  • Select a file on your computer and open it with Adobe DC for Mac.
  • Navigate to and click Edit PDF in the right-hand panel.
  • Edit your form as needed by selecting a tool from the top toolbar.
  • Click the Fill & Sign tool and select the Sign icon in the top toolbar to customize your signature in different ways.
  • Select File > Save to save the changed file.

How to Edit your Periodic Trends Worksheet Answers from G Suite with CocoDoc

Do you like using G Suite at work to complete forms? With CocoDoc you can make changes to your form right in Google Drive, filling out your PDF without leaving your favorite workspace.

  • Go to the Google Workspace Marketplace, then search for and install the CocoDoc for Google Drive add-on.
  • Go to Drive, find the form, right-click it, and select Open With.
  • Select the CocoDoc PDF option and allow your Google account to integrate with CocoDoc in the popup window.
  • Choose the PDF Editor option to open the CocoDoc PDF editor.
  • Use the tools in the top toolbar to edit your Periodic Trends Worksheet Answers where needed, for example to sign or add text.
  • Click the Download button to save your form.

PDF Editor FAQ

What type of questions should I expect on the Tableau Desktop specialist Exam?

Hey there! I took and passed the exam last month, and I thought it was fairly straightforward if you have used Tableau before (even a small project will do). Even if you haven't, the questions are not hard to prepare for. The exam consists of 30 multiple choice and "choose one or more true" questions. The questions fall into four groups:

Connecting to and Preparing Data
  • How to load in data (Excel and Tableau worksheets)
  • Merging or blending datasets together

Exploring and Analyzing Data
  • Analyzing time trends
  • Customer-level or other group-level analysis
  • Filtering and making sets

Sharing Insights
  • How to share results in Tableau
  • Using a dashboard and how you can change it to meet your analytical needs

Understanding Tableau Concepts
  • Using the "Format" pane in Tableau (e.g. how to bold or change the font of a title or axis label)
  • How categorical and continuous data are represented
  • Shape, Color, and Text "pills"

FYI, the bullet points under each of the four categories do not cover all of the questions you will encounter; these are just some of the most common ones I had to answer. If you don't know the answer to a question, you can actually Google it or use the Tableau Help feature within Tableau; both of these sources are allowed during the examination period.

But it's always best to understand how to use Tableau. There are several free resources available to help you study for the exam. I've compiled a list of the resources I used to prepare, most of which are free if you are a student with a valid university email! It can be found here - "Preparing for the Tableau Desktop Specialist Exam".

Good luck!

What are the best stocks to buy for the long-term?

Here is a list of the best stocks one can buy with the objective of LONG TERM holding. In fact, I use this list myself to unearth a few good stocks from a heap of average ones.

What is the strategy? How do you identify the best stocks trading in the Indian stock market? How do you start? Buying stocks of companies which have high sales, high net profit, or a high dividend payout is not going to work. Not that these stocks are bad, but it is essential to do further checks. In this post, we will discuss what a long term investor must check in a stock before buying it.

1. WHICH ARE THE BEST STOCKS?

The best stocks are ones which represent a "good business" and are also available at "undervalued price" levels.

  • Good business: What makes a good business? There can be several contributing factors, but what works best is 'free cash flow'. Read about blue chip stocks.
  • Undervalued price: What is an undervalued price? For this, one must know the 'intrinsic value' of a stock. When the market price is less than the intrinsic value, the stock is undervalued. Read about low PE stocks.

A good business will always generate high free cash flows. High free cash flow will eventually lead to high intrinsic value, and when the intrinsic value is high, there are more chances of finding the stock at undervalued price levels. (Check the free cash flow based calculator.) So what is the takeaway? Look for stocks with high free cash flow (FCF), and see how a good business builds its intrinsic value (you can use MS Excel to estimate intrinsic value).

2. WHICH ARE UNDERVALUED STOCKS?

Suppose a stock is trading at a market price of Rs.100/share. Upon estimation, its intrinsic value comes out to Rs.120/share. As the market price is less than the intrinsic value, the stock is said to be undervalued. To identify the best stocks, the essential ingredients are:

  • Free Cash Flow (FCF), and
  • Intrinsic Value (IV). Read more on the IV formula.

How do the two combine? FCF helps you estimate IV; then one can compare the current market price with the estimated intrinsic value to check for undervaluation. Read more on undervalued stocks.

3. THE COMPLICATION IN IDENTIFYING THE BEST STOCKS

It is not possible to accurately identify the best stocks without knowing their free cash flow and intrinsic value. Does this understanding make best-stock picking simpler? Yes and no.

  • Yes, because we now know which stock parameter to look at when picking the best stocks. Otherwise we simply waste our time looking at less important stock metrics like financial ratios etc.
  • No, because estimating both 'free cash flow' and 'intrinsic value' is a special skill. Only gifted people can do it accurately.

So how can a common man, who knows nothing about stocks, identify the best stocks? It is a tough task, but I have a solution for it.

4. THE ULTIMATE SOLUTION

Why am I calling it the ultimate solution? Because it lies within us: learn to estimate the intrinsic value of stocks yourself. From my experience, I can say three things about intrinsic value estimation:

  • First: Estimating an approximate intrinsic value of a stock can be done by anyone. No special skill is necessary.
  • Second: The more one practices estimating intrinsic value, the more the accuracy improves.
  • Third: It is better to trust an intrinsic value you estimated yourself than to buy stocks on others' advice.

I am sure these points make sense, right? But some might say that intrinsic value estimation is tough. How does one learn it? This is where my stock analysis worksheet can be helpful. How?
You can actually see for yourself how financial report data is converted into intrinsic value. Reading this article and using my Excel worksheet can give huge clarity about intrinsic value estimation, even to a novice, and a few days of practice can clear a lot of the cloud around it. So let's proceed and try to learn how to estimate the free cash flow and intrinsic value of stocks.

5. WHAT BUILDS INTRINSIC VALUE?

Before we get into the math of intrinsic value, let's understand the steps involved in its estimation. There are several methods of estimating the intrinsic value of stocks; one of the most reliable is the discounted cash flow (DCF) model. You can read this post to know more about it. But what I will show you here is a hybrid of the 'dividend discount model' and DCF. In this hybrid model, there are four steps which ultimately help us build the intrinsic value:

  • Step #1 (FCFE): Calculate the present Free Cash Flow to Equity (FCFE).
  • Step #2 (FCFE Growth): Forecast the FCFE growth rate for the next one year.
  • Step #3 (Expected Return): Quantify your 'expected return' (say 5%, 8%, 12% etc).
  • Step #4: Calculate the intrinsic value.

5.1 HOW TO ESTIMATE FREE CASH FLOW (FCFE)

A stock must show a positive free cash flow (FCFE); if not, its intrinsic value will also be negative. Only if the FCFE is positive does the stock stand a chance of being undervalued. How do we estimate free cash flow? The formula is:

FCFE = PAT + D&A - CAPEX - Increase in WC + New Debt - Debt Repaid

To estimate free cash flow, get the following values from the company's financial reports:

  • PAT: Open the 'profit and loss account'. Note the number against 'net profit after tax'.
  • CAPEX: Open the 'cash flow statement'. Go to 'cash flows from investing activities'. Note the numbers for 'purchase and sale of capital assets'.
  • D&A: Open the 'profit and loss account'. Go to the section where all 'expenses' are listed. Note the number against 'depreciation and amortisation'.
  • Increase in Working Capital (WC): Open the 'balance sheet'. Note current assets (CA) and current liabilities (CL) for two consecutive years. Then:
    Increase in CA = CA (Y2018) - CA (Y2017)
    Increase in CL = CL (Y2018) - CL (Y2017)
    Increase in WC = Increase in CA - Increase in CL
  • New Debt: Open the 'cash flow statement'. Go to 'cash flows from financing activities'. Note the number against 'proceeds from borrowing'.
  • Debt Repaid: Open the 'cash flow statement'. Go to 'cash flows from financing activities'. Note the number against 'repayment of borrowing'.

Gather these values in your Excel sheet and calculate the free cash flow (FCFE) with the formula above, as shown in the sketch below. Important points to note about the free cash flow calculation:

  • FCFE must always be positive.
  • If a company is in expansion mode, its capital expenditure (CAPEX) will be high. High CAPEX often leads to lower FCFE, but such companies will eventually yield higher FCFE in times to come. The wait for FCFE to turn positive can be 3+ years.
  • A sudden increase in current assets (CA) relative to current liabilities (CL) will also lead to lower FCFE.
  • A company relying too heavily on long term debt (year after year, over a long duration) to enhance its FCFE is not a good sign. Good companies rely less on debt; the majority of their cash comes from PAT and the provisions for D&A.
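To make the arithmetic concrete, here is a minimal sketch of the same calculation in Python (a stand-in for the Excel sheet; every figure below is hypothetical and not taken from any real annual report):

    # FCFE sketch with made-up numbers, all in Rs. Crore.
    pat = 500.0          # net profit after tax (P&L account)
    dep_amort = 80.0     # depreciation & amortisation (P&L account)
    capex = 150.0        # net purchase of capital assets (investing activities)
    new_debt = 60.0      # proceeds from borrowing (financing activities)
    debt_repaid = 40.0   # repayment of borrowing (financing activities)

    # Increase in working capital from two consecutive balance sheets.
    ca_2018, ca_2017 = 900.0, 820.0   # current assets
    cl_2018, cl_2017 = 600.0, 570.0   # current liabilities
    increase_in_wc = (ca_2018 - cl_2018) - (ca_2017 - cl_2017)   # 50.0

    fcfe = pat + dep_amort - capex - increase_in_wc + new_debt - debt_repaid
    print(f"FCFE = Rs.{fcfe:.1f} Crore")   # 500 + 80 - 150 - 50 + 60 - 40 = 400.0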
5.2 HOW TO ESTIMATE FCFE GROWTH (G)

In the step above we estimated the Free Cash Flow to Equity (FCFE) of a stock. Now we must estimate the rate at which that FCFE will grow over the next one year (g). There are two ways to do it: an easy way and a difficult way.

  • Easy way: Assume g = 5% p.a. The logic: in India, average inflation over the last 10 years has been close to 7.5% per annum. Over time, a good company will make sure its free cash flow beats the inflation rate, but that happens only in the long term. For a short horizon (like the next one year), assuming a growth rate below inflation is safer; hence we can settle on g = 5%. If you want, you can repeat the calculation for other g values like 3%, 6% etc.
  • Difficult way: Calculate the FCFE for the last 5 years, study the trend, and then make a safe assumption. I suggest not starting this way: downloading annual reports, searching for the data in them, and preparing the Excel sheet all take time, and if you are afraid of losing interest, begin with the easy way. If the stock then looks attractive, repeat the process via the difficult route. Another option is to use my stock analysis worksheet.

5.3 WHAT SHOULD BE THE EXPECTED RETURN (K)?

This step is easy, but note one important rule: k must always be more than g. Here as well, I suggest a rule of thumb: k = 8% per annum. The logic: over a long horizon (5+ years), the Sensex/Nifty can grow at a rate of 12% p.a., but at present we are making an assumption for the next one year only, so a smaller rate of return (relative to 12%) should be assumed. Hence I have settled on k = 8%. My suggestion is to repeat the calculation with a few different combinations of g and k values.

5.4 CALCULATION OF INTRINSIC VALUE

What do we have in hand so far?

  • FCFE
  • FCFE growth rate for the next one year (g)
  • Expected return for the next one year (k)

With these values we can estimate the intrinsic value of any stock using a formula called the Gordon Growth Model:

Intrinsic Value = Dividend / (k - g)

In our hybrid formula, we replace the dividend with FCFE, so the formula becomes:

Intrinsic Value = FCFE / (k - g)

What is the logic for this alteration? In the Gordon Growth Model, the dividend is taken into consideration because it is the 'real earnings' reaching the hands of investors. In other words, it is the dividend that creates real value for the shareholders, and this real value generator determines the intrinsic value of the stock. Similarly, free cash flow has the power to create real value for shareholders, in two ways:

  • One: A part of the FCF can be used to pay dividends to the shareholders.
  • Two: Another part can be reinvested back into the business to fund future growth (resulting in capital appreciation).

The real value generator (FCF) in turn determines the intrinsic value of the stock via the hybrid formula. An example of the intrinsic value calculation follows.
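Continuing the hypothetical numbers from the FCFE sketch above, here is the hybrid formula in Python; the per-share comparison technically belongs to the next section, but it is included here so the whole check sits in one place:

    # Hybrid Gordon Growth sketch: Intrinsic Value = FCFE / (k - g).
    # All inputs are hypothetical and for illustration only.
    fcfe = 400.0                  # free cash flow to equity, Rs. Crore
    g = 0.05                      # assumed FCFE growth for the next year (5%)
    k = 0.08                      # expected return (8%); k must exceed g

    intrinsic_value = fcfe / (k - g)                      # Rs.13,333.3 Crore
    shares_outstanding = 100.0                            # shares, in Crore
    iv_per_share = intrinsic_value / shares_outstanding   # Rs.133.33

    market_price = 110.0                                  # current price per share
    print(f"IV/share = Rs.{iv_per_share:.2f}")
    print("Undervalued" if iv_per_share > market_price else "Overvalued")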
6. BEST STOCKS ARE UNDERVALUED

How do we check whether a stock is undervalued? Just follow these two steps:

  • Calculate IV/share: This is the intrinsic value converted to a per-share value. Get the number of shares outstanding (N) from the company's financial reports; then IV/share = Intrinsic Value / N.
  • Compare: Compare the calculated IV/share with the current market price of the stock. If IV/share is more than the current price, the stock is undervalued.

No matter how strong the underlying business is, a stock cannot be a good buy until its market price is undervalued.

7. THE NECESSITY OF STOCK ANALYSIS

There are 5,000+ stocks currently trading in the Indian stock market (BSE). Of these, which are the best? The answer is not easy; in fact, it is so valuable that people who have found it have become millionaires. Can we common folk find this answer? Yes, it is possible, but we have to follow a procedure: two basic screening criteria that help identify the best stocks among ordinary ones. When people undertake the intrinsic value estimation of a stock, they are actually applying these two screens:

  • Screen #1: Remove fundamentally weak stocks. How is this done? Only stocks whose free cash flow is positive are fundamentally strong.
  • Screen #2: Remove overvalued stocks. How is this done? Only stocks whose market price is less than their intrinsic value per share are undervalued.

LIST OF BEST STOCKS TO BUY IN INDIA IN 2019 (updated 13th-Aug-2019; the table of stocks itself is not reproduced here). The column legend:

  • Size: size of the company in terms of its market capitalisation.
  • M.Cap: market capitalisation in Rs. Crore.
  • FCF: free cash flow in Rs. Crore.
  • FGR: estimated future growth rate for the next 1Y (%).
  • EGR: expected growth rate for the next 1Y (%).
  • Valuation: whether the stock is undervalued or overvalued.

BOTTOM LINE

I hope this post showed you a shortcut for estimating intrinsic value and finding the best stocks to buy. But you must be wondering, "how do I put this theory into practice?" Well, I have something special for you: my stock analysis worksheet, which converts all this theory into actionable steps. Kindly upvote my answer and follow me.

Why is Python so popular despite being so slow?

Yes, it can be up to 200x slower than the C family. It's not just that it's interpreted, since Lua can also be interpreted but is much faster. Mike Pall, the Lua genius, has said that it's because of some decisions about language design, and at least one of the PyPy guys agreed with him. See "Have tracing JIT compilers won?" (read the whole thing).

By the way, I should note that although web apps aren't my thing, latency matters there too. Although even slow sites are relatively fast these days, we haven't yet hit the point where faster no longer matters. For some older comments from the days when the numbers were bigger, see "Marissa Mayer at Web 2.0":

Google VP Marissa Mayer just spoke at the Web 2.0 Conference and offered tidbits on what Google has learned about speed, the user experience, and user satisfaction.

Marissa started with a story about a user test they did. They asked a group of Google searchers how many search results they wanted to see. Users asked for more, more than the ten results Google normally shows. More is more, they said.

So, Marissa ran an experiment where Google increased the number of search results to thirty. Traffic and revenue from Google searchers in the experimental group dropped by 20%.

Ouch. Why? Why, when users had asked for this, did they seem to hate it?

After a bit of looking, Marissa explained that they found an uncontrolled variable. The page with 10 results took 0.4 seconds to generate. The page with 30 results took 0.9 seconds.

Half a second delay caused a 20% drop in traffic. Half a second delay killed user satisfaction.

This conclusion may be surprising -- people notice a half second delay? -- but we had a similar experience at Amazon. In A/B tests, we tried delaying the page in increments of 100 milliseconds and found that even very small delays would result in substantial and costly drops in revenue.

Being fast really matters. As Marissa said in her talk, "Users really respond to speed."

A major problem for the future is that datasets keep getting bigger, and at a rate much faster than memory bandwidth and latency improve. I was speaking about this to a hedge fund tech guy at the D language meetup last night. His datasets are maybe 10x bigger than 10 years ago, and memory is maybe only 2x as fast. These relative trends show no sign of slowing, so Moore's Law isn't going to bail you out here. He found that at data sizes of 30 gig for logs, Python chokes. He also said that you can prise numpy from his cold dead hands, as it's very useful for quick prototyping. But, contra Guido, no, Python isn't fast enough for many serious people, and this problem will only get worse. Python has horrible cache locality, and when the CPU has to wait for a memory access because the data isn't in the cache, you may have to wait 500 cycles (see "Locality of reference").

That points to something rather valuable: you want productivity and abstraction, but not to have to pay for it. Andrei Alexandrescu may have one answer:

http://bitbashing.io/2015/01/26/d-is-like-native-python.html
The Case for D
Programming in D for Python Programmers

In the past few decades, pursuing instant gratification has paid handsomely in many areas. In a world where the old rules no longer applied and things were changing quickly, you were much better off trying something and correcting course when it didn't work rather than being too thoughtful about it from the beginning. That seems to be beginning to change, and increased complexity is a big part of that.
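To get a feel for the size of the gap being described, here is a small benchmark sketch you can run yourself. numpy's compiled loop stands in for "native code"; the array size is arbitrary, and the exact ratio will vary with machine and interpreter version:

    # Pure-Python reduction vs. the same reduction in compiled C via numpy.
    # Treat the printed ratio as indicative only; it varies by machine.
    import time
    import numpy as np

    n = 10_000_000
    data = [1.0] * n
    arr = np.ones(n)

    t0 = time.perf_counter()
    total = 0.0
    for x in data:              # every iteration dispatches through the C runtime
        total += x
    t1 = time.perf_counter()

    t2 = time.perf_counter()
    np_total = arr.sum()        # one call, a tight C loop over a contiguous buffer
    t3 = time.perf_counter()

    print(f"pure Python: {t1 - t0:.3f}s, numpy: {t3 - t2:.4f}s, "
          f"ratio ~{(t1 - t0) / (t3 - t2):.0f}x")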
Rewriting the performance-sensitive bits of your code in C sounds like you get the best of both worlds. For some applications that may be true. In other cases you may find you have walked into a trap: so gratifying to have your prototype working quickly, but before you know it the project is bigger than you imagined, and at that stage it's not so easy to rewrite bits (and as you do, you now have to manage two code bases in different languages, plus the interface between them, and keep them in sync).

Cython with memory views also seems like a great option, until you realize that you can't touch Python objects if you are writing a library (whether for your own use or for others) where there is some possibility that you might want to use your code without engaging the GIL (global interpreter lock), i.e. in multi-threaded mode. In that situation you may end up depending on external C libraries for some purposes, since Python is off-limits. And that's fine, but it's yet more complexity and dependencies.

On the other hand, here is how you can call Lua from D (the reverse is equally simple). You get the benefits of native code with productivity and low-cost high-level abstraction, but can still use a JITed scripting language if it suits your use case (JakobOvrum/LuaD):

    import luad.all;

    void main() {
        auto lua = new LuaState;
        lua.openLibs();
        auto print = lua.get!LuaFunction("print");
        print("hello, world!");
    }

And here is how you write an Excel function in D that can be called directly as a worksheet function (I wrote the library with help from my colleagues; see the D Programming Language Discussion Forum):

    import xlld;

    @Register(ArgumentText("Array to add"),
              HelpTopic("Adds all cells in an array"),
              FunctionHelp("Adds all cells in an array"),
              ArgumentHelp(["The array to add"]))
    double FuncAddEverything(double[][] args) nothrow @nogc {
        import std.algorithm: fold;
        import std.math: isNaN;

        double ret = 0;
        foreach(row; args)
            ret += row.fold!((a, b) => b.isNaN ? 0.0 : a + b)(0.0);
        return ret;
    }

My point is that it's a false dichotomy. It's not fast-but-painfully-unproductive vs slow-but-productive. You can have both, if you have a bit of imagination and are prepared and able to make decisions based on the relevant factors rather than on social proof.

What Knuth actually said is a little more nuanced than the soundbite his words have become (a soundbite often used to terminate thought on a topic, when a little time pondering one's particular use case would pay dividends). He was saying: don't waste time worrying about little low-level hacks to save a few percent unless you know they matter. He wasn't talking about big choices like which language (and implementation) you use.

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.
It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail. (Page on archive.org)

Considering performance as one factor when you pick the framework you will be wedded to isn't premature optimization; it's prudent forethought about the implications, because it's easier to take the time to make the right decision today than to change it later. By the way, Knuth also said the following in the same article (we tend to hear what we want to hear and ignore the rest):

In our previous discussion we concluded that premature emphasis on efficiency is a big mistake which may well be the source of most programming complexity and grief. We should ordinarily keep efficiency considerations in the background when we formulate our programs. We need to be subconsciously aware of the data processing tools available to us, but we should strive most of all for a program that is easy to understand and almost sure to work. (Most programs are probably only run once; and I suppose in such cases we needn't be too fussy about even the structure, much less the efficiency, as long as we are happy with the answers.)

Python that isn't too clever may be easier to understand than old-school C/C++, but I am not sure this is always the case once heavy metaprogramming gets involved (and nobody forces you to write old-school C/C++ today). Static typing does bring benefits in terms of correctness and readability, and some very smart people have spoken about this in explaining their choices of less popular frameworks; see the Caml Trading talk at CMU.

There simply isn't an answer that applies to everyone: it depends on your circumstances and what you are trying to achieve. But I would encourage you, in the medium term, to consider the possibility that Python isn't the only high-level productive language for solving general purpose sorts of problems. Some of these other languages don't carry this kind of performance penalty, are better on the correctness front, and can interface to any libraries you might wish to use.

I've alluded to one already, but there are others. Lua may be too simplistic for some, but it is fast; Facebook uses it in their Torch machine learning library, and you can run that from an IPython notebook. It's a big world out there. Popular solutions get chosen for a reason, but when things change, the currently popular solution isn't always the best option for the future.

Addendum: a self-proclaimed 'Python fanboy' complained that I did not answer the question in this response. I think I did, although it's true that I would score nul points on the modern A-level-style box-ticking approach to scoring exams. Whether that is a negative thing depends on your perspective! Those watching closely will also notice that the question I answered is different from the one into which it has been merged; if you object to that, take it up with the guy who merged it.

Obviously Python is popular because it's gratifying to get quick results (libraries help too), and until recently performance didn't matter much, since processor speed and even the modest advances in memory latency and bandwidth leapfrogged, for a while, our ability to make use of them. So why not 'waste' a few cycles to make the programmer's life easier, since you aren't going to be doing much else with them? One can't stop there, though, because the future may be different.
SanDisk will have a 16 TB 2.5" SSD next year. It will cost a few thousand bucks and certainly isn't going to be in the next laptop I buy, but you can see which way the wind is blowing: when people have large amounts of storage available, they will find a way to use it, and memory simply shows no sign of keeping up. They are talking about doubling capacity every year or two; in 10-odd years that is 7 doublings, or 128x present capacity, yet memory might be only 2x faster. It also looks like I'll be able to get gigabit internet in my area soon enough (whether I'll move house to take advantage of it is yet to be decided), and it's only a matter of time before that's commonplace, I should think.

On top of that, modern SSDs are pretty fast. You can get 2.1 GB/sec sequential read throughput from an M.2 half-terabyte drive that costs less than 300 quid. (That's raw data; the effective throughput is possibly even higher if the data is compressed and you can handle the decompression fast enough.) Yet it seems the fastest JSON parser in the world takes 0.3 seconds to parse maybe 200 MB of data (so about 600 MB/sec), and parsing JSON isn't exactly the most expensive text-processing operation one might want to do. So one is not necessarily limited by IO even today, and the trends are only going one way. What is the best language to use in those circumstances? How long do you expect your software to last?

Addendum, 29th October 2016: A paper published in January 2016 by the ACM observes the following. It may not be true for everyone, and may not be true for many for a while yet. But my experience has been that as storage gets bigger, faster, and cheaper, people find a way to use it and the size of useful datasets increases; I think it truly is a case of William Gibson's "The future is already here - it's just unevenly distributed". From "Non-volatile Storage":

For the entire careers of most practicing computer scientists, a fundamental observation has consistently held true: CPUs are significantly more performant and more expensive than I/O devices. The fact that CPUs can process data at extremely high rates, while simultaneously servicing multiple I/O devices, has had a sweeping impact on the design of both hardware and software for systems of all sizes, for pretty much as long as we've been building them.

This assumption, however, is in the process of being completely invalidated.

The arrival of high-speed, non-volatile storage devices, typically referred to as Storage Class Memories (SCM), is likely the most significant architectural change that datacenter and software designers will face in the foreseeable future. SCMs are increasingly part of server systems, and they constitute a massive change: the cost of an SCM, at $3-5k, easily exceeds that of a many-core CPU ($1-2k), and the performance of an SCM (hundreds of thousands of I/O operations per second) is such that one or more entire many-core CPUs are required to saturate it.

This change has profound effects:

1. The age-old assumption that I/O is slow and computation is fast is no longer true: this invalidates decades of design decisions that are deeply embedded in today's systems.

2. The relative performance of layers in systems has changed by a factor of a thousand times over a very short time: this requires rapid adaptation throughout the systems software stack.
3. Piles of existing enterprise datacenter infrastructure—hardware and software—are about to become useless (or, at least, very inefficient): SCMs require rethinking the compute/storage balance and architecture from the ground up.

Addendum, March 2017: Intel 3D XPoint drives are now available, although they aren't cheap. Their I/O performance means it's increasingly difficult to claim you're necessarily I/O bound. This emerging storage has 1,000 times better latency than NAND flash (SSDs), and only 10 times worse latency than DRAM. Overnight, the bottleneck moved away from storage to the processors, the bus, the kernel and so on: the entire architecture, including applications and server processes. Guido's claim that Python is fast enough may still be true for many applications, but not if you are handling decent amounts of data.

These new storage technologies won't change everything overnight, but they'll get cheaper and more widespread quickly enough, and that will have implications for making the right language choices in the future. It's empirically true that what's possible in a language implementation depends an awful lot on language design; the two are intimately coupled. If you want to make the most of emerging storage technologies, it's unlikely, in my view, that Python will in general be the right tool for the job, even if it was a decade back.

Some people here say things that appear to make sense but are simply not right. Python is slow not because it is interpreted, or because the global interpreter lock (GIL) gets in the way of Python threads; those things only make it worse. Python is slow because language features that are there by design make it incredibly difficult to make fast. You can make a restricted subset fast; there's no controversy about that. What I say here is also what Mike Pall, the LuaJIT genius, has said, and the authors of PyPy agreed with him.

Here is what the author of Pyston, the Dropbox attempt to JIT Python (they gave up because it was just too difficult), has to say about why Python is slow (from "Why is Python slow"):

There's been some discussion over on Hacker News, and the discussion turned to a commonly mentioned question: if LuaJIT can have a fast interpreter, why can't we use their ideas and make Python fast? This is related to a number of other questions, such as "why can't Python be as fast as JavaScript or Lua", or "why don't you just run Python on a preexisting VM such as the JVM or the CLR". Since these questions are pretty common I thought I'd try to write a blog post about it.

The fundamental issue is: Python spends almost all of its time in the C runtime.

This means that it doesn't really matter how quickly you execute the "Python" part of Python. Another way of saying this is that Python opcodes are very complex, and the cost of executing them dwarfs the cost of dispatching them. Another analogy I give is that executing Python is more similar to rendering HTML than it is to executing JS -- it's more of a description of what the runtime should do rather than an explicit step-by-step account of how to do it.

Pyston's performance improvements come from speeding up the C code, not the Python code. When people say "why doesn't Pyston use [insert favorite JIT technique here]", my question is whether that technique would help speed up C code. I think this is the most fundamental misconception about Python performance: we spend our energy trying to JIT C code, not Python code.
This is also why I am not very interested in running Python on pre-existing VMs, since that will only exacerbate the problem in order to fix something that isn't really broken. I think another thing to consider is that a lot of people have invested a lot of time into reducing Python interpretation overhead. If it really was as simple as "just porting LuaJIT to Python", we would have done that by now.

I gave a talk on this recently, and you can find the slides here and an LWN writeup here (no video, unfortunately). In the talk I gave some evidence for my argument that interpretation overhead is quite small, and some motivating examples of C-runtime slowness (such as a slow for loop that doesn't involve any Python bytecodes). One of the questions from the audience was "are there actually any people that think that Python performance is about interpreter overhead?". They seem to not read HN :)

Update: why is the Python C runtime slow?

Here's the example I gave in my talk illustrating the slowness of the C runtime. This is a for loop written in Python, but one that doesn't execute any Python bytecodes:

    import itertools
    sum(itertools.repeat(1.0, 100000000))

The amazing thing about this is that if you write the equivalent loop in native JS, V8 can run it 6x faster than CPython. In the talk I mistakenly attributed this to boxing overhead, but Raymond Hettinger kindly pointed out that CPython's sum() has an optimization to avoid boxing when the summands are all floats (or ints). So it's not boxing overhead, and it's not dispatching on tp_as_number->tp_add to figure out how to add the arguments together.

My current best explanation is that it's not so much that the C runtime is slow at any given thing it does, but that it just has to do a lot. In this itertools example, about 50% of the time is dedicated to catching floating point exceptions. The other 50% is spent figuring out how to iterate the itertools.repeat object, and checking whether the return value is a float or not. All of these checks are fast and well optimized, but they are done every loop iteration, so they add up. A back-of-the-envelope calculation says that CPython takes about 30 CPU cycles per iteration of the loop, which is not very many, but is proportionally much more than V8's 5.

I thought I'd try to respond to a couple of other points that were brought up on HN (always a risky proposition). If JS/Lua can be fast, why don't the Python folks get their act together and be fast? Python is a much, much more dynamic language than even JS. Fully discussing that would probably take another blog post, but I would say that the increase in dynamism from JS to Python is larger than the increase going from Java to JS. I don't know enough about Lua to compare, but it sounds closer to JS than to Java or Python.
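If you want to reproduce the example from the quoted post yourself, a minimal timing harness looks something like this (the iteration count is scaled down from the post's 100,000,000, and the absolute numbers will differ from machine to machine):

    # Time the loop that executes no Python bytecodes yet still costs
    # ~30 CPU cycles per iteration inside CPython's C runtime.
    import itertools
    import timeit

    n = 10_000_000
    seconds = timeit.timeit(lambda: sum(itertools.repeat(1.0, n)), number=1)
    print(f"{seconds:.3f}s for {n:,} iterations "
          f"({seconds / n * 1e9:.1f} ns per iteration)")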

People Like Us

Really easy to use, and I've used it since they first started. Great customer service too!

Justin Miller