Student Evaluation Form L: English As A Second Language Course L - Fill & Download for Free


How to Edit Your Student Evaluation Form L: English As A Second Language Course L Online In the Best Way

Follow these steps to get your Student Evaluation Form L: English As A Second Language Course L edited quickly and smoothly:

  • Select the Get Form button on this page.
  • You will be taken to our PDF editor.
  • Edit your file with our easy-to-use features, like signing and erasing, and the other tools in the top toolbar.
  • Hit the Download button to save your finished document for future reference.

How to Edit Your Student Evaluation Form L: English As A Second Language Course L Online

When you edit your document, you may need to add text, attach the date, and make other changes. CocoDoc makes it very easy to edit your form. Let's see the easy steps.

  • Select the Get Form button on this page.
  • You will be taken to our PDF editor web app.
  • Once you are in the editor, click a tool icon in the top toolbar to edit your form, for example to sign or erase.
  • To add a date, click the Date icon, then hold and drag the generated date to the field you need to fill in.
  • Change the default date by deleting it and typing the desired date into the box.
  • Click OK to confirm the added date, and click the Download button once the form is ready.

How to Edit Text for Your Student Evaluation Form L: English As A Second Language Course L with Adobe DC on Windows

Adobe DC on Windows is a popular tool for editing files on a PC. It is especially useful when you have to do a lot of editing offline. So, let's get started.

  • Find and open the Adobe DC app on Windows.
  • Find and click the Edit PDF tool.
  • Click the Select a File button and upload a file for editing.
  • Click a text box to change the text font, size, and other formatting.
  • Select File > Save or File > Save As to confirm your changes to Student Evaluation Form L: English As A Second Language Course L.

How to Edit Your Student Evaluation Form L: English As A Second Language Course L with Adobe DC on Mac

  • Find the file you intend to edit and open it with Adobe DC for Mac.
  • Navigate to and click Edit PDF in the toolset on the right.
  • Edit your form as needed by selecting a tool from the top toolbar.
  • Click the Fill & Sign tool and select the Sign icon in the top toolbar to create your own signature.
  • Select File > Save to save all your edits.

How to Edit your Student Evaluation Form L: English As A Second Language Course L from G Suite with CocoDoc

Do you use G Suite for your work and need to sign a form? You can make changes to your form in Google Drive with CocoDoc, filling out your PDF and getting the job done in minutes.

  • Install the CocoDoc for Google Drive add-on.
  • In Drive, browse to the form to be filled, right-click it, and select Open With.
  • Select the CocoDoc PDF option, and allow your Google account to integrate with CocoDoc in the popup window.
  • Choose the PDF Editor option to begin filling out your form.
  • Click a tool in the top toolbar to edit your Student Evaluation Form L: English As A Second Language Course L where needed, for example to sign or add text.
  • Click the Download button to save your changes.

PDF Editor FAQ

Can you explain to non-coders the most impressive code you've seen?

Don’t be surprised if understanding this answer takes you many days, weeks, or even months. But if you value intellectual development for its own sake, I think you should be satisfied. Frankly speaking, I’ve found most other answers here completely disappointing.

Also, if you find something unclear, or if you think that some parts could be explained better, feel free to leave a comment.

Introduction

The most impressive code that I have seen comes from Daniel Friedman and William Byrd, who came up with the idea of “running the evaluator backwards”.

This phrase, however short, might be impenetrable to non-programmers, because it refers to some notion of “evaluator” and uses a rather unclear metaphor of “running things backwards”, both of which I’m going to explain.

Historical and conceptual background

But, to get a clearer picture, we should go back to the times before computers were invented. In those times, people were doing a lot to improve the language of mathematics, in particular to clarify its basic ideas and to refine mathematical notation. They managed to develop some systems — called formal systems — which were precise and which could encode all the prior mathematical knowledge.

These formalisms changed the perspective of mathematical inquiry: since mathematical theorems and questions are just sequences of symbols, perhaps we could forget about the meanings of those symbols, just look at their form, and still arrive at interesting conclusions?

This idea was pushed to an extreme by a great and famous mathematician known as David Hilbert. (Since the targeted audience of this answer consists of “non-coders”, I suppose that pasting his picture here might be a good idea.)

[photo of David Hilbert]

He suggested, in 1928, that there might exist a systematic procedure which takes a formal representation of a mathematical “yes-no” question and solves it in a way that only requires applying deterministic rules — without any thinking or understanding of the subject matter — and produces an answer (obviously, either “yes” or “no”).

This question, known as the Entscheidungsproblem (which is German for “decision problem”), was a direct inspiration for the invention of computers.

In 1936, a young mathematician called Alan Turing invented a model of computation known today as the “Turing machine”, which he used to show that Hilbert’s problem has no general solution: if we encode some questions in a formal language, then every systematic attempt to answer them will result in a process that never terminates.

Since you’re a non-coder, you’re probably curious what Turing looked like. There you go:

[photo of Alan Turing]

A Turing machine has been described as a machine that can read symbols from a (potentially infinite) tape of “memory cells” and write symbols to that (or some other) tape. Each cell must be able to store at least one bit of information.

The machine can roll the tape one cell left or one cell right. An important part of the description of a machine is its state transition function: there is a finite number of states that a machine can be in.

Designing Turing machines which perform basic arithmetic operations such as addition or multiplication is a common exercise among Computer Science students.
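To make the idea of a state transition function concrete, here is a sketch (my own illustration, not taken from any historical machine) of a table for a machine that increments a number written in unary — a row of 1s — by appending one more 1. Each rule reads: in this state, seeing this symbol, write that symbol, move the head, and switch to that state:

  ; states: scan (move right over the input), done (halt)
  ; rule format: (state read -> write move next-state)
  ((scan 1 -> 1 right scan)   ; keep moving right across the 1s
   (scan _ -> 1 stay  done))  ; at the first blank cell, write a 1 and halt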
I don’t want to get too technical with this yet, but I want you to embrace one critical idea that Turing had: you can design a Turing machine which reads a description of another Turing machine along with an input for that machine, and which simulates the operations of the machine whose description it read.

If you’ve ever used a Nintendo emulator on your smartphone, this probably shouldn’t be very surprising to you. But this simple idea is actually at the core of computing. Your operating system is a process which manages descriptions of other processes. Obviously, the authors of your operating system first had to write down a description of that process.

I want to turn your attention to this interplay between “description” and “process”. The process generated from a given description will depend on how the hosting process “understands” that description.

But where does this “understanding” in the hosting process come from?

It might come from another hosting process (if it happens to be “simulated” on some other machine), or it might come from the laws of physics (when a physical device such as a microprocessor “interprets” that description). Or it might come from you, from your understanding, if you’re, for example, simulating a Turing machine in your head (which is something that humans are perfectly capable of, at least as long as the machines aren’t too complex).

The evaluator

Let’s now move some two decades ahead, to the late 1950s. Some real computers already existed at that time, and — in addition to performing physical simulations and engineering computations — some people were trying to do more ambitious things with them. In particular, they wanted to provide those real computers with human-like intelligence.

Among them was John McCarthy. Here’s the obligatory picture:

[photo of John McCarthy]

The thing that McCarthy is famous for is the invention of the LISP programming language.

The fundamental problem that LISP aimed to solve was to conceive a system (or a notation) that could be used as a general knowledge representation, easy to work with for both humans and machines.

Initially LISP was many things, but the thing most associated with it today is the notation that McCarthy invented for representing data, called “s-expressions”. An s-expression is either an atomic expression (a sequence of characters that contains no whitespace or parentheses and does not consist of a single dot), or a sequence of s-expressions separated by whitespace and surrounded by a pair of matching parentheses. (If there’s more than one element, then the last element can be preceded by a dot.)

Here are some examples (each line contains one s-expression, possibly compound):

  2
  +
  (+ 2 2)
  (a . b)
  (a b . c)
  (a . (b . c))
  ()
  define
  (+ (* 2 3) (/ 4 5))
  (define (abs n) (if (< n 0) (- n) n))

The dot has a special meaning because of another design goal of LISP: it was intended as a system for LISt Processing (hence its name).

LISP was built around the following idea: a list can either be empty, or it can consist of a first element (head) attached to a list containing the remaining elements (tail).

This explains how we can build complex structures (by prepending elements to some existing simpler structures) but also how we can decompose those structures.

Consequently, every list is built from elementary structures called pairs. A pair consisting of some elements b and c can be written down as (b . c). If we construct a new pair whose head is a and whose tail is that previous pair, we get (a . (b . c)). But in LISP this can also be written down more conveniently as (a b . c). Moreover, if c (more generally called the “last tail”) is an empty list (), then the written form simplifies further to (a b).
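For instance, the following three s-expressions are just different spellings of the same three-element list — the first with every pair written out, the last fully simplified:

  (a . (b . (c . ())))
  (a b c . ())
  (a b c)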
The things that I’ve described so far may seem unfamiliar, abstract or even weird, but they aren’t difficult. Actually, they reflect a necessary property of every action: either we’re already done, or there is still something left to do.

Lists in LISP can be nested arbitrarily, which makes the LISP notation suitable for representing trees with any number of branches in a very simple and straightforward manner.

Why is this important?

It turns out that, for some reason, trees appear a lot in language processing. Expressions in many different languages — such as English, the language of mathematics, or most programming languages — can be analyzed and visualized in the form of trees. For example, the parse tree of the sentence “John hit the ball” can be written down using the notation invented by McCarthy as

  (S (N John) (VP (V hit) (NP (D the) (N ball))))

I don’t really know why trees are so ubiquitous. Probably because we can only understand complex things by analyzing them in simpler and more familiar terms (and, similarly, we can only build complex things from simpler ones).

LISP provided a way of expressing things that was easy for both humans and computers to analyze and process, which is why it was the main workhorse of Artificial Intelligence for a few decades.

But the above description doesn’t explain everything there is to be explained about LISP. Besides the weird but simple notation, LISP brought with itself a particular way of perceiving computations, namely — as reducing complex expressions to the simplest possible terms.

This idea wasn’t original in the sense that it was already widespread in mathematics. For example, under the common rules of arithmetic, the expression

  (+ 2 2)

should be reduced to 4.

What was perhaps more novel was that, in addition to numbers, LISP was also able to process symbols (one of its first applications was symbolic differentiation) and functions.

A function is something that, when it receives an argument (or a few arguments), produces a value. For example, the expression

  (function (x) (* x x))

denotes a function which takes one argument and multiplies it by itself. We could apply this function to some argument, say, 5:

  ((function (x) (* x x)) 5)

As you could expect, this expression should reduce to 25.

Expressions like this quickly get rather hairy, and to simplify them, we often name some intermediate terms. For example, a common name for a function which multiplies its argument by itself is “square”:

  (define square
    (function (x)
      (* x x)))

This allows us to write an expression equivalent to the above one as

  (square 5)

which is much easier to read.
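To see how naming and reduction work together, here is how an evaluator might reduce (square 5) step by step (a sketch of the reduction process, not code meant to be executed):

  (square 5)
  ; = ((function (x) (* x x)) 5)   replace the name with its definition
  ; = (* 5 5)                      substitute 5 for x in the body
  ; = 25                           apply the built-in multiplication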
But readability isn’t the sole purpose of naming things. One powerful feature of LISP was that it allowed one to express ideas using recursion, or self-reference. But, if you know anything about recursive definitions, you should realize that in order to make them work, we need to be able to specify at least one non-self-referential case (otherwise the joke that “in order to understand recursion, you only need to understand recursion” wouldn’t be funny. Oh, wait a minute!).

LISP provides the if form, which can be used precisely for that.

A simple example usage of the if form is the definition of absolute value, which, when given a negative argument, negates it (to make it positive), and otherwise returns its argument intact:

  (define absolute-value
    (function (x)
      (if (< x 0)
          (- x)
          x)))

As you can see, if has the following form:

  (if <condition>
      <consequent>
      <alternative>)

It first evaluates <condition>, and when its result is true, the value of the whole expression becomes the value of <consequent>; otherwise it becomes the value of <alternative>.

So for example,

  (absolute-value -5)

can be replaced with

  (if (< -5 0)
      (- -5)
      -5)

Now the value of the condition (< -5 0) is true (because -5 is smaller than 0), so the value of the whole expression becomes the value of the consequent (- -5), which is 5.

On the other hand, if we wanted to know the value of

  (absolute-value 6)

then we know that it’s equivalent to

  (if (< 6 0)
      (- 6)
      6)

Now the value of the condition (< 6 0) is false (because 6 isn’t smaller than 0), so the value of the whole expression becomes the alternative 6.

I hope that you find this obvious to the point of being boring. The if form can be used to define common logical connectives like conjunction (and) and disjunction (or). Those are typically special “syntactic forms”, such that

  (and <a> <b>)

is interpreted as

  (if <a> <b> #false)

and

  (or <a> <b>)

is interpreted as

  (if <a> #true <b>)

(The names for the logical values true and false begin with the # character to make sure that no one can redefine them.)

In case you ever wondered, you can define logical negation as a function:

  (define not
    (function (condition)
      (if condition
          #false
          #true)))

I hope that you didn’t find the examples used so far too off-putting. They relied on some mathematical knowledge of things like numbers and some operations on them (multiplying, subtracting and comparing). I did that because many people are familiar with things like “squaring” or “absolute values”, so it should be easier to reach them by leveraging the things they already know.

But while this can help some people, it can also be a barrier to others. So although there are many well-known recursive functions that operate on numbers, before I get on to recursion, we shall elaborate a language for talking about symbolic expressions.

We already said something about them when we were discussing s-expressions. We said that a list can either be empty or consist of a head and a tail (where the tail is also a list).

We also said that you can construct a list by prepending an element to the front of some existing list.

The basic operation of prepending an element to the front of a list is traditionally called cons. In particular, if y is the list (1 2 3), then

  (cons 0 y)

will produce the list

  (0 1 2 3)

Or, speaking more broadly, the operation

  (cons a b)

produces a pair

  (a . b)

for any two values a and b.

It should therefore be clear that, for example, the expression

  (cons 1 (cons 2 (cons 3 4)))

would evaluate to

  (1 2 3 . 4)

Given a value, we can ask whether it is a pair. The pair? function does that, so for example the value of the expression

  (pair? (cons 1 2))

is true, whereas the value of the expression

  (pair? 5)

is false. (It is a common convention among some programmers to use names that end with a question mark for functions which answer yes-no questions.)
Having a list resulting from the evaluation of the expression (cons a b), you can obtain its head and tail using the head and tail functions:

  (head (cons a b))

produces a, and

  (tail (cons a b))

produces b.

These four functions — cons, pair?, head and tail — are sufficient for working with arbitrarily complex tree structures. But it is also convenient to be able to refer directly to the first few elements of a list:

  (define first
    (function (list)
      (head list)))

  (define second
    (function (list)
      (head (tail list))))

  (define third
    (function (list)
      (head (tail (tail list)))))

  (define fourth
    (function (list)
      (head (tail (tail (tail list))))))

You can imagine this going on as long as you like (but if you have the urge to talk about a larger number of elements, there are usually better ways to achieve that).

LISP also provides a convenience function called list, such that, say,

  (list 1 2 3)

produces the list (1 2 3). Of course, you could build the same list using the cons operator, so the list function doesn’t add anything to the expressive power of our language. If we wanted to, we could leverage the way LISP handles multiple arguments to define it ourselves (but it would feel like cheating):

  (define list
    (function args
      args))

(Note the lack of parentheses around the first occurrence of args.)

I mentioned before that LISP can deal with symbolic expressions. Since expressions are essentially tree structures, we have the second part of the phrase “symbolic expressions” covered. But what about the “symbolic” part?

We have already seen some symbols in the programs that we wrote, like +, function, x, *, define, square, cons and so on. Some of those symbols had their meanings built in. Others received some meaning from us: we used the define form to assign meanings to symbols like square or absolute-value.

But we never talked about the symbols themselves — only about their meanings.

LISP provides a special operator called quote, which allows us to refer to symbols and symbolic expressions, rather than their meanings.

For example, the value of the expression

  (quote x)

is the symbol x. There is a built-in function symbol? which says whether a given object is a symbol or not.

But, in addition to quoting symbols, you can also quote whole expressions. The value of the expression

  (quote (+ 2 2))

is a list of three elements, whose first element is the symbol +, and whose second and third elements are instances of the number 2.

The expression (quote x) can be abbreviated as 'x (which makes the notation more succinct).

One of the most important properties of a symbol is its identity. Roughly speaking, two objects are considered the same symbol if they are both symbols and they “look the same”. So for example the symbol x is the same as the symbol x, and the symbol y is the same as the symbol y, but the symbol x is not the same as the symbol y.

This may sound like an obvious thing to say, but actually the notion of identity is extremely confusing, and almost every programming language gets it somehow wrong. The particular primitive notion of identity in LISP is associated with the eq? predicate. Two LISP objects are considered eq? if they are located in the same place in computer memory — and once a symbol is read into memory, it becomes identified with its location in that memory (when a symbol is read, the system checks whether it has already seen a symbol that looks the same).
But this isn’t the case for pairs. Each time you call the cons function, it may return a new pair from some free area of memory. For this reason, it may not be the case (and generally isn’t) that, say, (eq? '(1 2 3) '(1 2 3)).

The only list which is guaranteed to be eq? to every other list with the same content is the empty list: (eq? '() '()) is always true.

But we can use this primitive notion of identity to implement a more intuitive one, where lists that contain the same elements in the same order are considered the same:

  (define equal?
    (function (a b)
      (or (eq? a b)
          (and (and (pair? a)
                    (pair? b))
               (and (equal? (head a)
                            (head b))
                    (equal? (tail a)
                            (tail b)))))))

As you can see, we have referred to the term being defined in the body of this definition. But that’s OK, because we have one non-recursive case, and the size of the arguments will shrink towards that case with each recursive call.

Another example of recursion can be found in the definition of the append function, which takes two lists and appends them together, so for example (append '(a b) '(c d)) would produce the list (a b c d).

The base case of the recursion is when one of the lists is empty — in that case, we just need to return the other list.

As for the recursive step, let’s recall that the cons operation is only able to prepend a new element to the front of some existing list — so it should make sense to prepend elements from the first list to the second.

Obviously, (append '() b) should be b. If we wanted to prepend a single-element list, then we’d just need to cons this element to the second list:

  (append '(a) b)

is the same as

  (cons 'a b)

It may take some insight to observe that b is the same as (append '() b), and that '() happens to be the tail of '(a).

This leads us to the following definition:

  (define append
    (function (a b)
      (if (pair? a)
          (cons (head a)
                (append (tail a) b))
          b)))

The heavily parenthesized syntax of LISP may not be particularly readable. In languages with richer syntax, such as Haskell, you could express the same idea in two lines of code:

  append (h:t) b = h:(append t b)
  append [] b = b

Here, the : operator serves the purpose of cons. We don’t need the head and tail functions, because using that operator on the left-hand side of the = operator allows us to name the head and the tail of a list however we like (here we used the names h and t).

An even simpler and more natural example of a recursive definition is a function that takes a list of elements and a unary function, and produces a new list resulting from applying the function to each element. It is customarily called map, so you can expect that

  (map square '(1 2 3))

would produce

  (1 4 9)

The definition should follow this reasoning: if the list of elements is empty, then the result must also be empty. If it’s non-empty, then we need to apply the function to the first element, and cons the result onto the result of mapping over the tail of the list.

The definition of map in Haskell would look like this:

  map f (h:t) = (f h):(map f t)
  map f [] = []

In LISP, an equivalent definition would be:

  (define map
    (function (f l)
      (if (pair? l)
          (cons (f (head l))
                (map f (tail l)))
          '())))

Those examples should give us some taste of recursive thinking.
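To make the recursion concrete, here is how (append '(a b) '(c d)) unwinds under the definition above:

  (append '(a b) '(c d))
  ; = (cons 'a (append '(b) '(c d)))            the first list is a pair
  ; = (cons 'a (cons 'b (append '() '(c d))))   still a pair
  ; = (cons 'a (cons 'b '(c d)))                '() is not a pair: base case
  ; = '(a b c d)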
It may seem that, since the Haskell version is probably easier to grasp, it would make no sense to stick to this pile of nested parentheses.

But the spartan syntax of LISP has a virtue that no other syntax I know has: the expressions of the language, which is suited for processing lists of symbols, are themselves just lists of symbols.

So far we have learned about the following four core forms of the LISP language: define, function, if and quote (we also learned about some built-in functions such as *, -, cons or eq?, but that’s not important at this moment).

I have (somewhat) explained — using English prose — how those forms should be understood.

Now here’s the fun part: LISP is also a language, and it can itself be used to explain the meaning of the core forms of LISP. This is a very similar idea to that of the Universal Turing Machines that I mentioned earlier.

Normally, a programming language comes with a bunch of built-in functions (like number multiplication) that are associated with some names (like the * symbol). This association can be captured in a structure called an environment.

An environment can be represented by a list of pairs whose heads are names, and whose tails are values. For example, a structure that would hold some of the built-in LISP functions discussed here could be defined as:

  (define initial-environment
    (list
      (cons '* *)
      (cons '+ +)
      (cons '- -)
      (cons '= =)
      (cons '< <)
      (cons 'list list)
      (cons 'cons cons)
      (cons 'head head)
      (cons 'tail tail)
      (cons 'pair? pair?)
      (cons 'symbol? symbol?)
      (cons 'eq? eq?)))

In order to use those functions, we need to be able to look them up by name. Assuming that an association is present in our list, it can either be the first element of the list, or it can be somewhere in the rest of the list:

  (define lookup
    (function (key dictionary)
      (if (eq? (head (head dictionary))
               key)
          (tail (head dictionary))
          (lookup key (tail dictionary)))))

For the purpose of presentation, we can assume that a LISP program is a sequence of definitions followed by a single expression. Each definition will augment the environment with a proper binding. Running a program will mean reading all the definitions and then evaluating the final expression:

  (define run
    (function (prog env)
      (if (is? (head prog) 'define)
          (run (tail prog)
               (cons
                 (cons (second (head prog))
                       (third (head prog)))
                 env))
          (value (head prog) env))))

where is? is a simple utility function that checks whether the head of an expression is eq? to a particular symbol:

  (define is?
    (function (exp type)
      (and (pair? exp)
           (eq? (head exp) type))))

The actual point of the run function is just to collect the values from the definitions into the environment, so

  (run '((define x1 v1)
         ...
         (define xN vN)
         expr)
       env)

is equivalent to running

  (value 'expr (append
                 (list
                   (cons 'xN 'vN)
                   ...
                   (cons 'x1 'v1))
                 env))
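One consequence of this representation is worth spelling out (my observation, not stated in the answer): since run prepends each new binding to the front of the environment, and lookup returns the first matching pair it finds, a later definition shadows an earlier one:

  (lookup 'x (cons (cons 'x 2)
                   (cons (cons 'x 1)
                         '())))
  ; => 2, because the newer binding for x hides the older one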
The heart of the evaluator is the value function, which takes an expression and an environment and returns the value of that expression. Since there are a few different types of expressions — quote, function, if, and, or, application, symbols, and numeric or boolean values — the function needs to check the type and act accordingly. The following definition of value is interspersed with my explanations.

  (define value
    (function (exp env)

If the expression is a symbol, then we need to look up its value in the environment (to make the recursion work, we also calculate the value after the lookup):

      (if (symbol? exp)
          (value (lookup exp env) env)

In the case of quote, we simply return the quoted expression:

      (if (is? exp 'quote)
          (second exp)

In the case of if, we first evaluate the condition (second subexpression). If it succeeds, we return the value of the consequent (third subexpression), and otherwise we return the value of the alternative (fourth subexpression):

      (if (is? exp 'if)
          (if (value (second exp) env)
              (value (third exp) env)
              (value (fourth exp) env))

In the case of and and or, we transform them to use the if form, and then evaluate:

      (if (is? exp 'and)
          (value (list 'if (second exp)
                       (third exp)
                       #false)
                 env)
      (if (is? exp 'or)
          (value (list 'if (second exp)
                       #true
                       (third exp))
                 env)

We can consider function to be self-evaluating:

      (if (is? exp 'function)
          exp

The case of function application will be explained shortly. Note that we apply the value of the head of the expression to the values of the arguments:

      (if (pair? exp)
          (applied (value (head exp) env)
                   (map (function (arg)
                          (value arg env))
                        (tail exp))
                   env)

If the expression does not belong to any of the aforementioned types (e.g. it is a number or a boolean value), then we do not reduce it any further:

      exp)))))))))

This ends our definition of value. (Aren’t all those closing parentheses beautiful?)

In the case of function application, we need to consider two cases. If we are applying a function provided by a host implementation, then we need to delegate the application to the host. Otherwise we should:

  • extract the argument list from the function (that’s the second element, right after the function keyword)
  • extend the environment with new bindings which map argument names to the values
  • run the body of the function (that is, the tail of the tail) in this extended environment (we use run rather than value, because this allows us to use the define form inside the function form)

This is how this might be done in practice:

  (define applied
    (function (operator arguments env)
      (if (host-function? operator)
          (host-apply operator arguments)
          (run (tail (tail operator))
               (extended env
                         (second operator)
                         arguments)))))

We assume that the host provides the functions host-function? and host-apply for us. This assumption is convenient for testing, but if you find it confusing, you can ignore it and assume that there are no built-in functions and that the initial environment is empty.

The only thing that’s left to explain is how to extend the environment with bindings from argument names to argument values. We have essentially two cases to consider here: if the list of arguments is empty, then we are done. Otherwise we take the first argument name, bind it to the first value, and extend the environment with the tails of the argument names and values.

In order to handle functions with a variable number of arguments (such as list), we can also consider a third case: if the tail of the argument names is a symbol, then we bind this symbol to the whole list of values. In order to prevent the values from being evaluated too many times, we quote them:

  (define extended
    (function (env names vals)
      (if (pair? names)
          (extended
            (cons (cons (head names)
                        (list 'quote
                              (head vals)))
                  env)
            (tail names)
            (tail vals))
          (if (symbol? names)
              (cons (cons names
                          (list 'quote
                                vals))
                    env)
              env))))

That’s it. This program, whose entry point is — in our case — the run function, is called a “metacircular evaluator” (the “metacircular” adjective is just a fancy way of saying that it implements the language that it is itself implemented in. In a sense, you can think of it as a program whose purpose is to explain itself).
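Assuming the definitions above are loaded into a host LISP that supplies host-function? and host-apply (the text leaves the exact host setup open), running a small program through the evaluator might look like this:

  (run '((define square (function (x) (* x x)))
         (square 5))
       initial-environment)
  ; reads the define, extends the environment with a binding for square,
  ; and then evaluates (square 5), which should reduce to 25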
The evaluator isn’t very useful by itself — after all, the only thing it can do is slow down the execution of your program (assuming that you already have a running LISP interpreter on your machine — otherwise it just won’t work). It wasn’t intended to be useful, though — John McCarthy, who conceived it, intended it as a program for reading.

However, one of his students, Steve Russell, took this program and translated it to the machine language of the IBM 704 mainframe computer, which made LISP a practical programming system. This happened in 1958, and this is what computers looked like back then:

[photo of an IBM 704 mainframe]

So we now know what an evaluator is: it is a program (or a device) which attributes interpretations to other programs. We managed to fit one into a few dozen lines of code, using a handful of definitions.

At least we now understand one word from the phrase “running the evaluator backwards” — what’s left is to explain what we mean by:

Running things backwards

So far, we have presented a model of programming where computation was modelled as the reduction of complex expressions to the simplest terms. This isn’t the only conceivable model, and we have already mentioned a different one (Turing machines).

Now we’re going to see yet another model, one which resembles the work of a detective.

What detectives do is gather evidence and build possible images of some situation. Their goal is to settle facts, to synthesize knowledge from different sources and to spot possible inconsistencies between different testimonies. The ideal that they try to pursue is the truth.

So far, the notion of truth manifested itself in our programs in that we could ask some yes-no questions (like “is this number smaller than 0?”) and act differently depending on the answer. But sometimes we perform the reasoning in the opposite direction: we assume that some sentence is true, and then we ask what conditions need to be satisfied for that to be the case.

In this model, we can imagine that a program consists of facts and rules that are scattered around and contain scraps of knowledge, and our task is to synthesize a consistent world view (or to report that the different scraps of information are contradictory and that no such world view can exist).

One question is how to represent this knowledge. When I was introducing LISP, I suggested that — in addition to being a programming model — it provided a notation for representing knowledge (which essentially was a format for representing trees as sequences of symbols and parentheses). The idea of representing knowledge as trees seems to be a good one, but we should also be able to represent our “lack of knowledge”, or the fact that our knowledge is often incomplete — for example, we might know that “someone stole the car”, but we don’t know who it was.

The structures that represent incomplete knowledge are also useful for representing questions.
Broadly speaking, you can divide questions into two groups: “yes-no” questions (“did someone steal the car?”, “did John steal the car?”) and questions containing pronouns (“who stole the car?”, “what was stolen?”).

Of course, we understand that the role of pronouns like who, what or some, and nouns like something or somebody or everything, is different from the role of most nouns and proper names: usually, nouns are used to refer to some known things (as in “plants are green” or “it was John who stole the car”).

There is also a very significant difference between general sentences like “all plants are green”, and particular sentences like “John stole the car”. In the case of general sentences, we ought to understand them as rules:

  whatever is a plant, must also be green

or equivalently

  for any X, if X is a plant, then X is green

On the other hand, particular sentences are just unique facts about the world. The sentence “John stole this car” only says something about a single particular event that occurred once, and doesn’t allow us to conclude that “John steals every car” or anything like that.

To summarize, we’d like our programming system to be able to represent (at least) the following:

  • facts
  • rules
  • incomplete knowledge/questions

If you look at the examples that I gave so far, you can observe that they either stated that some objects have some properties (plants are green), or that some relation holds between some set of objects (John stole the car).

Since sentences in a natural language such as English tend to be ambiguous, we are going to use LISP’s s-expressions to represent our relations. We are going to write them down as follows:

  (<relation> <object-1> <object-2> ...)

(where properties are just relations with one argument). For example:

  (stole 'John 'the-car)
  (green 'this-tree)

We use quotation to represent individual names — remember that we also want to be able to represent things like pronouns (who, what, someone, everything etc.). Since — in more complex situations — using pronouns could be ambiguous, we’re going to allow arbitrary (unquoted) symbols to be used in this role.

So for example if we ask our system

  (stole 'John what)

(or: what did John steal?), we expect that one of the possible answers would be something like

  ((what . the-car))

Now that we have chosen some (rather straightforward) representation for relations, we need to come up with a way of representing rules and facts.

The people who developed this style of programming suggest writing the consequence of a rule before the conditions that trigger that rule, so we could write rules as:

  (conclude <consequence> from <conditions> ...)

where <consequence> is a relation, and <conditions> is zero or more relations. The interpretation is that all of the <conditions> need to be satisfied in order to derive the <consequence>.

If there are zero <conditions>, then the <consequence> is assumed to be true — in that case we are dealing with a fact, and we can use a slightly simplified notation for such situations, i.e. skip the from keyword:

  (conclude <fact>)

We could therefore write our examples as:

  (conclude (stole 'John 'the-car))
  (conclude (green X) from (plant X))

Now we have an idea of how to augment our model with facts and rules. The only thing that’s left to design is how to ask our model questions.

We need to be careful here, because in general there can be more than one answer to our question (John could have stolen more than one thing, for instance). Moreover, in some situations there could even be infinitely many answers, in which case the program could consume all the available memory and never finish anyway.
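Putting facts and rules side by side, a complete (if tiny) knowledge base in this notation might look as follows. The names are mine, invented for illustration; from these two facts and one rule, the system should be able to derive both (green 'fern) and (green 'oak):

  (conclude (plant 'fern))              ; a fact
  (conclude (plant 'oak))               ; another fact
  (conclude (green X) from (plant X))   ; a rule: whatever is a plant is green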
So our interface for asking questions should take this fact into account, and enable us to limit the number of answers that we get. For example, we could write

  (some 5 (stole 'John what))

to get an answer which could look like

  (((what . the-car))
   ((what . the-necklace))
   ((what . the-money))
   ((what . the-bracelet))
   ((what . Janes-heart)))

This should give us some idea of how to talk to our system.

We are now going to demonstrate how this relational model of computation can express some things that were expressible in the functional model of LISP. Then we are going to embed this model in the LISP model, that is, have our conclude and some forms processed by LISP functions.

Let’s recall the definition of append (for brevity, we’re going to use the Haskell version here):

  append (h:t) b = h:(append t b)
  append [] b = b

In the relational model, there are no “function arguments” and “function values”. All we have are relations that hold (or “are true”). But we can emulate the behavior of functions by adding an extra argument, which is going to represent the function’s value (or output, if you will). Consequently, we can represent the base case of the recursion as the following rule (one thing I didn’t mention about our language is that relation objects can be lists — but this shouldn’t be too surprising, as we really don’t have many other ways to represent complex objects):

  (conclude (appending () b b))

The recursive step can be expressed using the following rule:

  (conclude (appending (h . t) b (h . l))
            from (appending t b l))

For example, we can expect that the following query should succeed:

  (some 1 (appending (1 2) (3 4) result))

Let’s think for a second about how this should work. Our reasoning engine is going to search through all the applicable rules for the appending relation.

The first rule (the base case) is not going to be useful here, because the lists () and (1 2) differ.

But the second rule is going to match, with h having the value 1, t having the value (2), b having the value (3 4) and result having the value (1 . l), provided that the condition (appending (2) (3 4) l) succeeds.

To answer this question, we need to search our rules again, this time for the relation (appending (2) (3 4) l).

The first rule fails, because the lists () and (2) differ. The second rule matches, with the h’ variable having the value 2, t’ having the value (), b’ having the value (3 4) and l having the value (2 . l’), provided that the condition (appending () (3 4) l’) succeeds.

To answer this question, we need to search our rules again, this time for the relation (appending () (3 4) l’).

Now the first rule succeeds, because the first argument is the empty list (). This allows us to conclude that l’ is the list (3 4). We can step back to see that l is (2 . l’), so we can conclude that it’s (2 3 4). Making a similar step leads us to the conclusion that result, which is (1 . l), must be (1 2 3 4).
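In fact, nothing in the relational reading singles out any particular argument: we may leave several of them unknown at once. As a foretaste of what follows, asking for three answers with the first two arguments unknown should enumerate ways of splitting a list (a sketch of the expected answers; their order depends on the search strategy):

  (some 3 (appending a b (1 2 3)))
  ; expected answers, in some order:
  ;   ((a . ()) (b . (1 2 3)))
  ;   ((a . (1)) (b . (2 3)))
  ;   ((a . (1 2)) (b . (3)))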
But we can now do things that the functional model wasn’t capable of doing easily. We called the last argument result to suggest that it stores the “result” of what was meant to be a “function”. But from the perspective of our programming model, this argument isn’t any different from any other argument. So we could ask, for example, what list needs to be appended to the list (3 4) to obtain the list (1 2 3 4):

  (some 1 (appending what (3 4) (1 2 3 4)))

The base rule doesn’t match, because (3 4) and (1 2 3 4) differ. Therefore we look to the second rule, which assigns to what the value (h . t), to h the value 1, and to l the value (2 3 4), and then tries to agree on (appending t (3 4) (2 3 4)). This time t receives the value (h’ . t’) and h’ receives the value 2, and we further ask about (appending t’ (3 4) (3 4)). We now see that the base rule matches, with t’ being equal to (). This allows us to conclude that — since what is (h . t), t is (h’ . t’), h is 1, h’ is 2 and t’ is () — what must be (1 2).

If we imagine that the direction “from arguments to the result” is “forward”, then the direction “from the result to arguments” could be called “backward”. This should be a hint for how to interpret the phrase about “running programs backwards”: rather than computing a “result” (or “output”), we can obtain sets of arguments (or “inputs”) that would produce the desired result.

Before we dare to interpret what the phrase “running the evaluator backwards” would mean, let’s try to be a bit more precise in describing how our reasoning system works.

By now, we should be able to see that there are essentially two things going on here: searching through the available rules, and matching variables with values — synthesizing and expanding our knowledge (or assumptions) along the way.

One thing that we should be concerned with is how to deal with a potentially infinite number of results, and this is what we’ll actually begin with.

Let’s consider the sequence of natural numbers:

[math]0, 1, 2, 3, 4, \dots[/math]

We can see a clear pattern here: each number can be constructed by adding 1 to the previous number. We could capture this pattern in the following definitions:

  (define numbers-from
    (function (n)
      (cons n (numbers-from (+ n 1)))))

  (define numbers (numbers-from 0))

Suppose now that we’d like to learn the value of

  (first numbers)

According to the rules specified in the value function, to learn the value of a function application, we must first learn the values of all its arguments — in this case, numbers.

numbers is, by definition, the result of applying the numbers-from function to the value 0, which is the value of (cons 0 (numbers-from 1)). To learn the value of that expression, we must first evaluate all the arguments. In particular, we need to learn the value of (numbers-from 1). By definition, it’s (cons 1 (numbers-from 2)). To learn the value of that expression, we must first evaluate all the arguments…

See the problem? We will never learn the first element of the list, because the evaluation will go on forever (or at least until some computer runs out of memory, or someone pulls the plug).

Although we could modify the evaluator to delay the evaluation of arguments until their values are actually needed (this strategy, known as “lazy evaluation”, is the default in some programming languages such as Haskell), we can also use a certain programming trick to represent infinite sequences: rather than returning a list, we can return a recipe for creating a list:

  (define $numbers-from
    (function (n)
      (function ()
        (cons n
              ($numbers-from (+ n 1))))))

  (define $numbers ($numbers-from 0))

You may have noticed that we use the $ character as a prefix in the names of these objects. It suggests that they are potentially infinite sequences (or rather — recipes for generating sequences).
Such recipes are slightly different from lists to work with — if we wish to get the first element of such a sequence, we must first “follow the recipe”:

  (first ($numbers))

Note the additional pair of parentheses around the argument to first: we need them, because the $numbers object is not a list, but a function. It is a rather peculiar function, because it doesn’t take any arguments — its only purpose is to delay the computation. Applying “no arguments” to that function forces the delayed computation.

Getting the second element of such an infinite sequence is also a bit tricky. First we need to force the object to get a pair. Then, we need to force the tail of that pair, which will give us a new pair. Then it suffices to take the head of that pair:

  (head ((tail ($numbers))))

You can see that it’s rather easy to make a mistake in the number of parentheses. It may therefore be convenient to have a function that converts some number of initial elements of a stream to a list:

  (define take
    (function (n stream)
      (if (or (= n 0) (eq? stream '()))
          '()
          (if (pair? stream)
              (cons (head stream)
                    (take (- n 1)
                          (tail stream)))
              (take n (stream))))))

For example, (take 3 $numbers) would produce the list (0 1 2).

There are a lot of examples of beautiful programs that make use of infinite sequences, but since they have little to do with the merit of my answer, I won’t be showing them here.

Now that we know how to deal with infinite sequences, we can get back to our implementation. As we noted before, the two essential things that our inference engine will be doing are:

  • searching through the available rules
  • matching variables with values (synthesizing knowledge along the way)

Probably the most intriguing part of the above description is this idea of “synthesizing knowledge”, so this is what we’re going to begin with.

Recall that we decided to represent our knowledge as “potentially incomplete trees”, i.e. trees that may contain holes (or variables). We can imagine a situation where there are two sources of information about the same thing, and the information from those two sources partially overlaps. For example, one person might have heard that the car has been stolen, and another person might have heard that John stole something. We would then be dealing with the following two bits of knowledge:

  (stole X 'the-car)
  (stole 'John Y)

If we synthesize those two bits of information, we might conclude that

  (stole 'John 'the-car)

(Of course intuitively this might be invalid reasoning, but the point of this example is to show the process of knowledge integration, rather than the process of valid reasoning.)

It can also be the case that two bits of knowledge are contradictory or disjoint and cannot be integrated. For example, you cannot integrate these two pieces of knowledge:

  (stole 'John X)
  (broke 'John Y)

The process of knowledge integration is, in some ways, similar to checking whether two LISP objects are equal?: if two pieces of knowledge are different symbols (or numbers), they cannot be integrated. If they are different variables (holes), they can be integrated only if those variables wouldn’t be bound to different symbols (or numbers). If one of them is a variable, and the other is not, then we should be able to bind that variable to the value (which means that the variable must either be unbound, or it must be bound to something that can be integrated with the value).
Otherwise, if they are both pairs, then we need to be able to integrate their heads and their tails.

The historical name for this kind of knowledge integration is unification. Compared to equal?, it needs to take an additional argument that represents the bindings from variables to values or to other variables. Also, the return value of the function will either be false, if the objects can’t be unified, or a list of bindings for variables that would allow us to obtain a form integrating the knowledge from the two terms being unified.

The bindings are going to be represented in the same structure that we used for representing environments in our evaluator — a list of pairs. This allows us to re-use some functions that we’ve defined previously, like lookup. However, since we now consider the possibility that a symbol may be unbound, we need a way to detect it:

  (define bound?
    (function (symbol bindings)
      (and (pair? bindings)
           (or (eq? symbol
                    (head (head bindings)))
               (bound? symbol
                       (tail bindings))))))

Now we can translate the logic described above to LISP:

  (define unify
    (function (x y bindings)
      (if (eq? bindings #false)
          #false
          (if (equal? x y)
              bindings
              (if (symbol? x)
                  (if (bound? x bindings)
                      (unify (lookup x bindings) y
                             bindings)
                      (cons (cons x y) bindings))
                  (if (symbol? y)
                      (unify y x bindings)
                      (if (and (pair? x)
                               (pair? y))
                          (if (or (is? x 'quote)
                                  (is? y 'quote))
                              #false
                              (unify (tail x) (tail y)
                                     (unify (head x) (head y)
                                            bindings)))
                          #false)))))))

We can now check that, say,

  (unify '('stole 'John X)
         '('stole Y 'the-car) '())

results in

  ((X . 'the-car) (Y . 'John))

(If you ever try to run this code in some actual LISP system, you might get something like ((X quote the-car) (Y quote John)), but note that — given what we said earlier about quote and the dot — those are just two ways of writing the same thing.)

This should give us some idea of how to synthesize knowledge. What’s left is the strategy for applying the rules to our query.

Recall that, in our evaluator, we considered a program to be a sequence of definitions followed by a single expression. Here, similarly, we can imagine a program to be a sequence of rules followed by a single query (compare this with the definition of run):

  (define answer
    (function (question assumptions)
      (if (is? (head question) 'conclude)
          (answer (tail question)
                  (cons (conclusion
                          (head question))
                        assumptions))
          (take (second (head question))
                (solutions
                  (quote-head
                    (third
                      (head question)))
                  assumptions)))))

Similarly to run, the answer function simply converts the human-readable representation of a program into an internal representation that is easier to operate on. So for example, the expression

  (answer
    '((conclude (apd () b b))
      (conclude (apd (h . t) b (h . l))
                from (apd t b l))
      (some N (apd (1 2) (3 4) X)))
    '())

is equivalent to

  (take N (solutions '('apd (1 2) (3 4) X)
                     '((('apd (h . t) b (h . l))
                        ('apd t b l))
                       (('apd () b b)))))

The conclusion function drops the from symbol from conclude forms (if present), which allows us to transform the human-readable (conclude <consequence> from <conditions> …) into a more compact form, (<consequence> <conditions> …), that will be used for internal processing. It also makes sure that the first symbol, which denotes a relation, gets quoted, so that forms like (stole 'John X) become ('stole 'John X):
  (define conclusion
    (function (form)
      (cons (quote-head (second form))
            (if (pair? (tail (tail form)))
                (map quote-head
                     (tail
                       (tail
                         (tail form))))
                '()))))

  (define quote-head
    (function (relation)
      (cons (list 'quote (head relation))
            (tail relation))))

The actual work is delegated to the solutions function. It needs to do the following:

  • starting with empty knowledge, search through the rules to find the ones that can be helpful in answering our question
  • for each such rule, unify the question with the consequence of that rule, getting a set of bindings (“pending knowledge”); subsequently, for each condition of the considered rule, solve it recursively, augmenting the knowledge

The second part is actually a bit tricky: when we invoke solutions recursively, we get a (potentially infinite) sequence of bindings, which represent alternative conclusions (or “knowledges”, if you please). For example, suppose that we have a rule with three conditions. Unification with the consequence of that rule produces one initial candidate for a solution (one set of bindings). Suppose that applying this candidate recursively to the first condition gives us two updated candidates. Then, if we want to apply the second condition, we need to apply it to both new candidates. As a result, we’ll get two lists of candidates. Before applying them to the third condition, we need to merge them (or “append” them, if that rings a bell) into a single list.

It could be tempting to use the append function that we defined before. However, it won’t work, because it is only capable of working with lists, not with lazy streams. But there is also a more subtle issue here: imagine that you have two streams, and you want to combine them. They are both potentially infinite, so if you want to be able to access the elements of each, you cannot “first enumerate all the elements of the first stream, and then all the elements of the second”, because you’ll never be done enumerating the first stream (it’s infinite!).

What you need to do instead is interleave the elements from both streams.

This is how it could be done:

  (define interleave
    (function (stream-a stream-b)
      (if (eq? stream-a '())
          stream-b
          (if (pair? stream-a)
              (cons (head stream-a)
                    (interleave (tail stream-a)
                                stream-b))
              (function ()
                (interleave stream-b
                            (stream-a)))))))

The interleave function isn’t the whole story, though. Suppose that, having some set of partial solutions, we wish to expand them further, i.e. apply a function to them recursively and merge (interleave) the results. Of course, if the set is empty, we have nothing left to do (and hence also return an empty set).

Otherwise the set can either be a stream or a list. If it is a list, we interleave the result of applying the function to the first element of the list with the expansion of the function over the tail of the list.

I admit that I don’t fully understand why streams are handled the way they are here, but the idea is that we return a new stream which — when forced — produces the expansion of the elements of the original stream:

  (define expand
    (function (f elements)
      (if (eq? elements '())
          '()
          (if (pair? elements)
              (interleave
                (f (head elements))
                (expand f (tail elements)))
              (function ()
                (expand f (elements)))))))

(If you’re somewhat familiar with functional programming, you may think of expand as a variant of the flat-map function. If you’re not, suppose that you have a function f that takes a natural number and returns a list with that number repeated that many times; for example, (f 0) produces (), (f 1) — (1), (f 2) — (2 2), (f 3) — (3 3 3), and so on. Then, (flat-map f '(1 2 3)) would produce the list (1 2 2 3 3 3). The expand function is similar, but it can operate on infinite streams, and the order of the elements in the resulting list can be thought of as “unspecified”.)
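To see why interleaving matters, compare what happens with two infinite streams. Under the definitions above, forcing the combined stream alternates between the two sources, so neither of them starves the other (a sketch reusing $numbers-from and take from earlier):

  (take 4 (interleave ($numbers-from 0)
                      ($numbers-from 100)))
  ; => (0 100 1 101), drawing elements from both streams in turn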
We now know (hopefully) how to merge the results of recursive calls on the conditions of a rule. We’re going to repeat this process for all the conditions of a rule.

This pattern of repeating some operation over some collection of data is very common in programming, even though in this presentation it may seem to appear out of the blue. Some programmers call it “reduction” or “folding”. Speaking algebraically, if [math]\circ[/math] is a binary function, then

[math]fold(\circ, [x_1, x_2, \dots, x_n]) = x_1 \circ x_2 \circ \dots \circ x_n[/math]

The problem with this formulation is that it does not specify the order in which the [math]\circ[/math] function is applied, and that the value for the empty list is unspecified. We can be more specific about the order of operations — the two most obvious candidates are “left-to-right” and “right-to-left”. We can also add an additional argument to serve as the default value for the empty list. This leaves us with the following formulations:

[math]fold_{left}(\circ, e, [x_1, x_2, \dots, x_n]) = (\dots((e \circ x_1) \circ x_2) \circ \dots \circ x_n)[/math]

[math]fold_{right}(\circ, e, [x_1, x_2, \dots, x_n]) = (x_1 \circ (x_2 \circ \dots (x_n \circ e) \dots))[/math]

Here I’m only going to define the left-to-right variant of [math]fold[/math], although the order in which we process the conditions of rules shouldn’t actually matter (if you want, you can try to define the right-to-left variant as an exercise):

  (define fold-left
    (function (op e l)
      (if (pair? l)
          (fold-left op
                     (op e (head l))
                     (tail l))
          e)))

If you’re familiar with some popular programming language like Python, you can think of

  (fold-left f init items)

as an equivalent of a simple for loop with the following structure:

  def fold_left(f, init, items):
      for x in items:
          init = f(init, x)
      return init

(If you don’t have any experience with programming languages like this, you can safely ignore this remark.)

In our logic engine, we’re also going to need to consider only those rules whose consequence matches the query being processed. Selecting only those elements of a list which satisfy some desired criterion is also a very common pattern in programming:

  (define only
    (function (satisfying? elements)
      (if (pair? elements)
          (if (satisfying? (head elements))
              (cons (head elements)
                    (only satisfying?
                          (tail elements)))
              (only satisfying?
                    (tail elements)))
          '())))

Reading these definitions may not be a very pleasant experience for you. It can give you some insight into the process of programming, though. Probably every advanced programmer would be able to write these definitions by heart. They may seem very abstract, but they are also very convenient. You can’t deduce their convenience from the definitions alone, though, which is why it is often more instructive to see examples.

So let’s suppose that even? is a function which evaluates to true if its argument is an even number (divisible by 2), and to false otherwise. Even if you didn’t understand the definition of only, you could probably guess that

  (only even? '(1 2 3 4 5 6))

produces the list (2 4 6).

(Good programmers make a habit of inventing names for things that facilitate making good guesses. The traditional name for only is filter, but — other than being traditional — it’s a rather bad name. The name map used before is also traditional, and it’s also a bad name, but I haven’t managed to come up with anything better so far. Maybe you will.)
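The payoff of such small, generic functions is how naturally they combine. Assuming the hypothetical even? function from the example above, summing the even elements of a list takes one line:

  (fold-left + 0 (only even? '(1 2 3 4 5 6)))
  ; = (+ (+ (+ 0 2) 4) 6) = 12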
Before we move on to the actual implementation of our logic engine, there’s still one issue waiting to be resolved, namely — that of the identity of holes.

Suppose that we have the following rule:

(conclude (same X X))

Here there are two instances of the X variable in the rule’s consequence — and clearly they are meant to refer to the same object.

If we consider another rule,

(conclude (green X) from (plant X))

then the X variable from the conclusion is meant to be the same as the one in the condition. But we don’t want this variable to be (in principle) the same as the one used in the previous rule. Even though in both cases we used the name X, we could have used, say, variable X1 in the first rule and variable X2 in the second rule (or something like that).

Moreover, if — during our question answering process — we need to use the same rule more than once, we don’t want the same variable to bind two different objects (because that wouldn’t unify!).

For this reason, prior to applying a rule, we want to rename all the variables that appear within that rule, using names which are guaranteed to be unique. For this purpose, let’s assume that we have a primitive function — fresh-symbol — which generates a new symbol each time it’s evaluated.

(Actually, calling such a thing “a function” might be considered an abuse by some. We consider it “primitive”, because we wouldn’t be able to define such an operation using the LISP language that we’ve described before.)

In order to guarantee uniqueness, we might restrict our language, for example, to forbid identifiers which contain the ~ character in variable names. Then, subsequent invocations of, say, (fresh-symbol 'x) could produce values like x~1, x~2, x~3 and so on.

If we want to preserve the identity of symbols, we must make sure that all instances of a particular symbol are substituted with the same fresh symbol. To do so, we can apply the following procedure:

- extract all the variables that appear within a rule
- generate a fresh name for each of those variables
- substitute the variables with the fresh names throughout the rule

To be able to extract variables, we need to be able to treat lists as sets, rather than sequences. A set is something that either has a given element or doesn’t. That’s all that a set is. For example, the lists (a b c) and (c a b) can represent the same set. Of course, the essential relation is that of having an element. An empty set does not have any element. A non-empty set has some particular element either if that element is its head, or if it is in its tail:

(define in?
  (function (element set)
    (and (pair? set)
         (or (eq? element (head set))
             (in? element
                  (tail set))))))

(compare it with the definition of bound?)

Given two sets, you can construct their union (a set containing all the elements from both these sets) and their intersection (a set containing the elements that are in both these sets). For our purpose, the union operation should be sufficient (you can think about how you’d implement intersection yourself).

The code for calculating union is very similar to append. The only difference is that, before appending, we check whether a particular element is already in the list being appended to:

(define union
  (function (a b)
    (if (pair? a)
        (if (in? (head a) b)
            (union (tail a) b)
            (cons (head a)
                  (union (tail a) b)))
        b)))
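A quick sanity check, tracing the definition by hand: since union skips the elements of the first list that already occur in the second one,

(union '(a b c) '(b c d))

produces (a b c d), in which each element appears exactly once. This is what lets us treat such lists as sets.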
Equipped with union, we can extract variables from an expression rather easily — recall that in our system a variable is a symbol that is not quoted:

(define variables
  (function (expression)
    (if (symbol? expression)
        (list expression)
        (if (or (not (pair? expression))
                (is? expression 'quote))
            '()
            (union
             (variables (head expression))
             (variables (tail expression)))))))

In order to reuse functions that operate on bindings, such as lookup or bound?, we can generate fresh names wrapped in our familiar binding structure (a list of pairs):

(define fresh-names
  (function (variables)
    (map (function (variable)
           (cons variable
                 (fresh-symbol variable)))
         variables)))

This makes variable renaming rather straightforward:

(define substitute
  (function (expression bindings)
    (if (and (symbol? expression)
             (bound? expression bindings))
        (lookup expression bindings)
        (if (or (is? expression 'quote)
                (not (pair? expression)))
            expression
            (cons (substitute (head expression)
                              bindings)
                  (substitute (tail expression)
                              bindings))))))

Making the variables that appear in a rule independent from the variables appearing in other rules (or in other instances of the same rule) is now rather easy:

(define independent
  (function (rule)
    (substitute
     rule
     (fresh-names (variables rule)))))
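To see all three steps working together, consider renaming one of the appending rules from earlier. The exact suffixes below are illustrative, since fresh-symbol generates whatever it generates:

(independent
 '(('appending (X . T) Y (X . L))
   ('appending T Y L)))

might produce something like (('appending (X~7 . T~8) Y~9 (X~7 . L~10)) ('appending T~8 Y~9 L~10)). Both occurrences of each variable receive the same fresh name, yet the renamed rule no longer shares any variable with any other rule (or with another instance of itself).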
Now that we have all the pieces of the puzzle (or, as I prefer to think, a sufficiently elaborate vocabulary), we can put them together. Let’s recall what we wrote above: the actual work is delegated to the solutions function. It needs to do the following:

- starting with empty knowledge, search through the rules to find the ones that can be helpful in answering our question
- for each such rule, unify the question with the consequence of that rule, getting a set of bindings (“pending knowledge”); subsequently, for each condition of the considered rule, solve it recursively, augmenting the knowledge

If your impression is that we’re repeating the same thing over and over again, you’re absolutely right. Programming requires focus, and to attain focus, programmers need to remind themselves what they are actually doing. Moreover, programming is repeating the same thing over and over again, albeit in a different language. So let’s now try to translate this logic using the notions that we managed to elaborate:

(define solutions*
  (function (query rules knowledge)
    (if (eq? knowledge #false)
        '()
        (expand
         (function (rule)
           (fold-left
            (function (knowledges condition)
              (expand
               (function (knowledge)
                 (function ()
                   (solutions* condition
                               rules
                               knowledge)))
               knowledges))
            (list (unify query
                         (head rule)
                         knowledge))
            (tail rule)))
         (map independent
              (only (function (rule)
                      (unify query
                             (independent (head rule))
                             knowledge))
                    rules))))))

Admittedly, this code can be a bit difficult to read — the things that appear last should actually be read first (the call of the only function). This is not how I would normally write this code. But it’s not how the code is written that is the main source of difficulty. It takes time to get familiar with functions like fold; the way of composing recursive solutions also requires getting used to. I made a lot of mistakes on the way to arriving at this function. I have been testing it on my computer along the way, and I eventually made it work. I believe that simplifying this one function, and understanding its different aspects, could form a whole research field of its own, so don’t feel intimidated if you find it difficult to understand.

The form of the results from the solutions* function is far from satisfactory, though, and can be simplified rather easily. We can see this if we run it on some example data:

(take 1
      (solutions*
       '('appending P Q (1 2 3 4))
       '((('appending () Y Y))
         (('appending (X . T) Y (X . L))
          ('appending T Y L)))
       '()))

The results are rather cryptic:

(((Y~52 . (1 2 3 4))
  (Q . Y~52)
  (P . ())))

We see that a part of our result is some transient symbol Y~52, even though we only asked for P and Q. Therefore we need to process the result further, and extract the final values from the chain of bindings that were constructed during repeated unification. Our extraction routine must account for the case when some unified object is a list containing some variables. In such a case, we want to extract the final value of each variable (if there are cyclic references in the bindings, we’re screwed). A value is final when it is not bound:

(define extract
  (function (value bindings)
    (if (and (symbol? value)
             (bound? value bindings))
        (extract (lookup value bindings)
                 bindings)
        (if (pair? value)
            (cons (extract (head value)
                           bindings)
                  (extract (tail value)
                           bindings))
            value))))

In our final result, we’re only interested in the values of the variables that were present in the original query. So we can wrap the call to solutions* to perform such extraction:

(define solutions
  (function (query rules)
    ($map (function (result)
            (map (function (variable)
                   (cons variable
                         (extract variable result)))
                 (variables query)))
          (solutions* query rules '()))))

where $map is a variant of map which is able to work with streams:

(define $map
  (function (f $)
    (if (eq? $ '())
        '()
        (if (pair? $)
            (cons (f (head $))
                  ($map f (tail $)))
            (function ()
              ($map f ($)))))))

Now the result is much easier to decode:

(take 5
      (solutions
       '('appending P Q (1 2 3 4))
       '((('appending () Y Y))
         (('appending (X . T) Y (X . L))
          ('appending T Y L)))))

produces

(((P . ()) (Q . (1 2 3 4)))
 ((P . (1)) (Q . (2 3 4)))
 ((P . (1 2)) (Q . (3 4)))
 ((P . (1 2 3)) (Q . (4)))
 ((P . (1 2 3 4)) (Q . ())))

This gives us all the possible arguments to the append function that would produce the list (1 2 3 4). From the perspective of LISP, this means that we managed to reverse the order of execution — rather than calculating the result, we treat the result as given and “run our program in the opposite direction”.

But the possibilities of our language are even greater: we can write queries like

(some 3 (appending x y z))

which could be interpreted as: “what are some possible relations between the arguments to appending and its result?”. The detective can respond to this query in the following way:

(((x . ())
  (y . b~49)
  (z . b~49))
 ((x . (h~50))
  (y . b~102)
  (z . (h~50 . b~102)))
 ((x . (h~50 h~103))
  (y . b~155)
  (z . (h~50 h~103 . b~155))))

As you can see, there are some weird symbols appearing in the result, like b~49 or h~103. These symbols aren’t quoted, so they are variables. They stand for any value, so in a sense, such a result represents an infinite number of results. Some people say that these values are unreified (because “to reify” means “to make a thing out of something”, from the Latin word “res”, which means “a thing”). You should be impressed that our language is capable of expressing such a concept.
Running the evaluator backwards — the first attempt

By now we should have a fairly good understanding of what an evaluator is, and of what it means to run a function backwards. It should therefore be fairly straightforward to guess what the phrase “running the evaluator backwards” might mean: just as we managed to translate the append function so that it became understandable to our “detective”, we are now also going to try to translate the value function and its friends in a similar fashion.

Now, from the course of my presentation this might seem like an obvious idea. As I wrote earlier, the evaluator was invented in the late 1950s (if we discount Turing’s ideas). “Detective-style programming” was developed in the early 1970s. Both ideas were very well known to programmers working in the field called “Artificial Intelligence” — a field that tends to attract great minds. They were both presented in MIT’s introductory course to Computer Science for almost three decades, starting in the mid-1980s. Yet — to my knowledge — it wasn’t until the second decade of the 21st century that someone (namely Daniel Friedman and William Byrd) actually merged those two ideas to obtain a working program and started exploring its consequences (the first presentation of this idea that I know of appeared around 2012).

Let’s recall the code of our evaluator. Its heart consisted of the value function, which dispatched on the possible type of the expression being evaluated (to make the code simpler, I removed the parts responsible for processing the and and or forms):

(define value
  (function (exp env)
    (if (symbol? exp)
        (value (lookup exp env) env)
        (if (is? exp 'quote)
            (second exp)
            (if (is? exp 'if)
                (if (value (second exp) env)
                    (value (third exp) env)
                    (value (fourth exp) env))
                (if (is? exp 'function)
                    exp
                    (if (pair? exp)
                        (applied (value (head exp) env)
                                 (map (function (arg)
                                        (value arg env))
                                      (tail exp))
                                 env)
                        exp)))))))

We’d like to translate this program to our ‘detective language’. However, you can observe a few problems with this endeavour. First, it uses the symbol? function, which cannot be expressed in the detective language: that language only allows us to pattern-match, either on literals (like particular symbols or numbers) or on structures. So while we can easily convert the transformations of quote, if and function, namely

(conclude (evaluates ('quote x) env x))

(conclude
  (evaluates ('if test then else)
             env
             result)
  from (evaluates test env #true)
       (evaluates then env result))

(conclude
  (evaluates ('if test then else)
             env
             result)
  from (evaluates test env #false)
       (evaluates else env result))

(conclude
  (evaluates ('function args . body)
             env
             ('function args . body)))

there is no way for us to write a rule that would apply only if a given argument is a symbol.

A similar problem concerns application. While we can emulate the pair? function in our language rather easily (by unifying an object with a (head . tail) pattern), this is insufficient for our purpose: the rule for application doesn’t only check whether a given expression is a pair — it does so in circumstances which rule out the head of that pair being any of the symbols quote, if and function (as well as and and or in the original evaluator).
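As an aside, the pair? emulation mentioned above could look like the following rule (a hypothetical pair relation, introduced here only for illustration):

(conclude (pair (h . t)))

A query like (some 1 (pair (1 2))) succeeds, because (1 2) unifies with (h . t), binding h to 1 and t to (2); a query like (some 1 (pair 5)) produces no solutions, because the number 5 does not unify with a pair pattern.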
But, while we can write rules that are satisfied if a given subexpression is a particular symbol, we have no way of saying that a rule should be satisfied if a given variable is not a particular symbol (or set of symbols).

The art of saying “no”

It would therefore be desirable to extend our language with a capability for expressing such restrictions, so that we would be able to write down our rule as:

(conclude
  (evaluates (operator . operands)
             env
             result)
  from
  (evaluates operator env procedure)
  (evaluate operands env arguments)
  (application procedure arguments
               env result)
  (differ operator 'quote)
  (differ operator 'function)
  (differ operator 'if))

where differ is a special predicate which succeeds only if its arguments do not unify.

Likewise, we would like to be able to verify whether a given argument is a symbol, regardless of its actual value:

(conclude
  (evaluates variable
             ((variable . value) . _)
             value)
  from
  (symbol variable))

(conclude
  (evaluates variable
             ((another-variable . _) . env)
             value)
  from
  (evaluates variable env value)
  (symbol variable)
  (differ variable another-variable))

Let’s now think for a minute about how we could implement such an extension. We would like to be able to extend our set of rules with a bunch of “magic rules” like differ or symbol. These rules would either succeed or fail (or they could be unresolved, in the sense that we have insufficient information to know whether they succeed or fail). It should be easy to see that there are actually two questions here:

1. How should the general mechanism for writing such magic rules be organized?
2. How should the particular magic rules (i.e. differ and symbol) be implemented?

Let’s begin with the first question. We’re going to need to modify solutions* and its representation of knowledge: so far, we have only been using bindings, but now we’d also like to include a list of constraints. The most straightforward way of doing that is to store a pair whose head is a list of bindings, and whose tail is a list of constraints.

Technically, we could just use the cons function to construct that pair. However, this would obscure our intent. Moreover, if the bindings are #false, there’s no need to construct the pair, because there are no values to be constrained. So, instead of cons, we’re going to use the make-knowledge function:

(define make-knowledge
  (function (bindings constraints)
    (if (eq? bindings #false)
        #false
        (cons bindings constraints))))

Accordingly, we need to be able to access both the bindings and the constraints of our knowledge representation:

(define knowledge-bindings
  (function (knowledge)
    (if (eq? knowledge #false)
        #false
        (head knowledge))))

(define knowledge-constraints
  (function (knowledge)
    (if (eq? knowledge #false)
        #false
        (tail knowledge))))
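A quick sanity check of this representation, with purely illustrative values: (knowledge-bindings (make-knowledge '((X . 1)) '())) produces ((X . 1)), (knowledge-constraints (make-knowledge '((X . 1)) '())) produces (), and, since failure propagates, (make-knowledge #false '()) is simply #false.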
Of course, it could be the case that some piece of knowledge (or rather, hypothesis) is inconsistent, in the sense that the bindings do not satisfy some of the constraints, and therefore we’d like to be able to check for this consistency.

As we said before, a constraint is a function whose value can either be #true, which means that the constraint is satisfied, #false, which means that the constraint is violated, or some other value, which means that we don’t yet have sufficient information to decide.

The last case is easy to handle — we just keep the constraint intact. If a constraint is violated, then we can immediately say that our hypothesis is false. But if the result is #true, then we know that the constraint is satisfied, and therefore we can remove it from our set of constraints. We therefore need a function that takes a constraint, the outcome of applying this constraint to a given set of bindings, and the pair expressing the knowledge being verified. The above logic can then be translated as:

(define verified
  (function (constraint
             outcome
             knowledge)
    (if (eq? outcome #false)
        #false
        (if (eq? outcome #true)
            (make-knowledge
             (knowledge-bindings knowledge)
             (only (function (c)
                     (not (eq? c constraint)))
                   (knowledge-constraints knowledge)))
            knowledge))))

This allows us to check the consistency of knowledge in the following way:

(define consistent
  (function (knowledge)
    (fold-left
     (function (knowledge constraint)
       (if (eq? knowledge #false)
           #false
           (verified constraint
                     (satisfied constraint knowledge)
                     knowledge)))
     knowledge
     (knowledge-constraints knowledge))))

where the satisfied function applies the constraint to the knowledge. Its exact definition depends on how we decide to represent constraints.

Let’s recall that we have decided to represent regular rules as lists of predicates (with quoted heads), for example

(('appending () Y Y))

or

(('appending (X . T) Y (X . L))
 ('appending T Y L))

or, in general,

(<consequence> <conditions> ...)

where <conditions> is a (possibly empty) list. We could represent constraints as pairs:

(<consequence> . <verifier>)

where <verifier> is a LISP function that verifies the constraint. It is safe to assume that it takes two arguments: the <consequence> form, which allows us to specify which arguments are of concern to us, and a set of bindings that are meant to be tested against the constraint.

Given this representation, the satisfied function should just take the <verifier> and apply it to the <consequence> and to the bindings from a given knowledge:

(define satisfied
  (function (constraint knowledge)
    ((tail constraint)
     (head constraint)
     (knowledge-bindings knowledge))))

(If you find this definition puzzling, you should give it a thought. However, it might be helpful to read on, until some actual definitions of constraints appear — it’s usually easier to think about concrete examples than abstract definitions.)
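For a concrete (if silly) example, here is a toy constraint of my own devising, whose verifier ignores its pattern and always succeeds, together with the way satisfied invokes it:

(define always-holds
  (cons '('anything X)
        (function (pattern bindings)
          #true)))

(satisfied always-holds (make-knowledge '() '()))

evaluates to #true: satisfied takes the verifier from the tail of the constraint pair and applies it to the pattern from its head and to the current bindings. The differ and symbol constraints defined below follow exactly this shape, with verifiers that actually inspect their patterns.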
Now that we have an idea of how to represent and use constraints, we need to integrate it into our inference engine. Let’s recall the definition of the solutions* function:

(define solutions*
  (function (query rules knowledge)
    (if (eq? knowledge #false)
        '()
        (expand
         (function (rule)
           (fold-left
            (function (knowledges condition)
              (expand
               (function (knowledge)
                 (function ()
                   (solutions* condition
                               rules
                               knowledge)))
               knowledges))
            (list (unify query
                         (head rule)
                         knowledge))
            (tail rule)))
         (map independent
              (only (function (rule)
                      (unify query
                             (independent (head rule))
                             knowledge))
                    rules))))))

We’re going to modify it to account for constraints (the modified version will be called solutions&, because the & character looks like a bond, which is something that constrains). There will be the following differences:

1. we need to treat magic rules/constraints differently from regular rules
2. upon a recursive call, we want to filter out all the solutions that are inconsistent
3. in many places where we used knowledge before (which previously meant “bindings”), we’re now going to use (knowledge-bindings knowledge) or something even more complicated, and where we returned bindings, we now need to make-knowledge

There’s one remark concerning the filtering out (point 2). The list of answers obtained from the recursive call is potentially infinite, so we’d need a variant of the only function able to operate on streams. However, even such a function wouldn’t be satisfactory, because the consistent function might remove some of the constraints (the ones for which the verifier returned #true). Therefore we need to combine the capabilities of the only function and the map function, along with the ability to operate on infinite streams.

It turns out that we already have such a function — we called it expand. The only difference is that expand doesn’t accept a function that could return #false; it accepts functions that produce lists. Therefore, we can take the result of the consistent function, and if it’s #false, return the empty list, and otherwise return a single-element list containing the result:

(define listed
  (function (value)
    (if (eq? value #false)
        '()
        (list value))))

So, here’s the definition of solutions&:

(define solutions&
  (function (query rules knowledge)
    (if (not knowledge)
        '()
        (expand
         (function (rule)
           (if (procedure? (tail rule))
               (constrained query
                            knowledge
                            rule)
               (fold-left
                (function (knowledges condition)
                  (expand
                   (function (knowledge)
                     (function ()
                       (solutions& condition
                                   rules
                                   knowledge)))
                   knowledges))
                (listed
                 (consistent
                  (make-knowledge
                   (unify query
                          (head rule)
                          (knowledge-bindings knowledge))
                   (knowledge-constraints knowledge))))
                (tail rule))))
         (map independent
              (only (function (rule)
                      (unify query
                             (independent (head rule))
                             (knowledge-bindings knowledge)))
                    rules))))))

As in the case of solutions*, I arrived at this definition by way of trial and error. I’d like to draw your attention to the following subexpression:

(if (procedure? (tail rule))
    (constrained query
                 knowledge
                 rule)
    (fold-left
     ...))

It replaces the direct call to fold-left from solutions*. The constrained function is defined as:

(define constrained
  (function (query knowledge rule)
    (consistent
     (make-knowledge
      (unify query
             (head rule)
             (knowledge-bindings knowledge))
      (cons rule
            (knowledge-constraints knowledge))))))

which I hope is at this point straightforward enough that I don’t need to explain it.
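In passing, note how the pieces compose: the filtering step described above is just expand applied to the composition of listed and consistent. A hypothetical helper making the idiom explicit could look like this:

(define consistent-only
  (function (knowledges)
    (expand (function (knowledge)
              (listed (consistent knowledge)))
            knowledges)))

It keeps every hypothesis that consistent does not reject (possibly with some of its constraints already discharged), and it works on infinite streams as well as on lists.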
Now that we have some idea about the general mechanism of using constraints, we can answer the second question that we asked at the beginning of this section, namely — how to implement the differ and symbol constraints.

Let’s begin with symbol. The rule itself could be obtained from evaluating the expression

(cons '('symbol X)
      (function (pattern bindings)
        ???))

where the ??? is to be filled in by us shortly.

The pattern argument is going to have a value like ('symbol X~567) during function application. Of course, the 'symbol element isn’t particularly interesting to us; we care more about the second element of pattern. We should obtain its value (using the extract function that we wrote before) and see if it is:

- a symbol (a variable), which means that the value is not reified yet, so we can’t decide whether the constraint is satisfied or violated
- a list of two elements, whose first element is the quote symbol and whose second element is a symbol, which means that the constraint is satisfied
- anything else, which means that the constraint is violated

Translating this logic to LISP, we could write it as:

(define check-symbol
  (function (value)
    (if (symbol? value)
        'unknown
        (if (and (pair? value)
                 (pair? (tail value))
                 (eq? (tail (tail value)) '())
                 (eq? (head value) 'quote)
                 (symbol? (second value)))
            #true
            #false))))

This definition allows us to complete our constraint:

(cons '('symbol X)
      (function (pattern bindings)
        (check-symbol
         (extract (second pattern)
                  bindings))))

The implementation of the disequality constraint is going to have a similar form. The rule for this constraint might be obtained from evaluating the expression

(cons '('differ X Y)
      (function (pattern bindings)
        ???))

It should be easy to see that this time we’ll be interested in the second and the third elements of pattern. However, the way we are going to check those elements for equality may not be obvious at first: we’re going to use the unify function, but this time we’re going to interpret its results differently:

- if the patterns fail to unify, then they cannot be made equal under any circumstances, which means that the constraint is certainly satisfied
- if the patterns unify, then the interpretation depends on the result:
  - if the bindings resulting from the application of unify are exactly the same as the bindings passed to that function, then the given bindings already violate the constraint
  - otherwise, if they are different, there are some extra conditions that would need to hold in order to violate the constraint

Translating this logic to LISP, we get:

(define check-disequality
  (function (unified bindings)
    (if (eq? unified #false)
        #true
        (if (eq? unified bindings)
            #false
            'unknown))))

So, we could write down the disequality constraint rule as:

(cons '('differ X Y)
      (function (pattern bindings)
        (check-disequality
         (unify (second pattern)
                (third pattern)
                bindings)
         bindings)))

It makes sense to gather these rules in one place, and pass them to the answer function instead of an empty list of assumptions:

(define fundamental-assumptions
  (list
   (cons '('symbol X)
         (function (pattern bindings)
           (check-symbol
            (extract (second pattern)
                     bindings))))
   (cons '('differ X Y)
         (function (pattern bindings)
           (check-disequality
            (unify (second pattern)
                   (third pattern)
                   bindings)
            bindings)))))
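A few sanity checks, assuming the definitions above: (check-symbol 'X~5) evaluates to unknown, because the value is still an unresolved variable; (check-symbol '(quote foo)) evaluates to #true, because it is a quoted symbol; (check-symbol '(1 2)) evaluates to #false, because a list is not a symbol; and (check-disequality #false '((X . 1))) evaluates to #true, because two terms that fail to unify can never be made equal.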
We also need to adapt our interface, i.e. the solutions function, to be able to process our new representation of knowledge. It is a relatively simple task, even though the resulting code is rather lengthy — instead of a list of bindings, the result is now a list of pairs of bindings and constraints:

(define solutions
  (function (query rules)
    ($map
     (function (result)
       (cons
        (map (function (variable)
               (cons variable
                     (extract variable
                              (knowledge-bindings result))))
             (variables query))
        (map (function (constraint)
               (cons (second (head (head constraint)))
                     (map (function (variable)
                            (extract variable
                                     (knowledge-bindings result)))
                          (tail (head constraint)))))
             (knowledge-constraints result))))
     (solutions& query rules
                 (make-knowledge '() '())))))

I admit that this function is barely readable in this form. This isn’t very important from our point of view, because it is only responsible for displaying the results. (Perhaps it could be rewritten in a more digestible form.) We can test the result using some sample rules:

(answer
 '((conclude
    (distinct-symbols X Y)
    from
    (differ X Y)
    (symbol X)
    (symbol Y))
   (some 1 (distinct-symbols A B)))
 fundamental-assumptions)

produces ((((A . X~215) (B . X~221)) (symbol X~221) (symbol X~215) (differ X~215 X~221))), whereas

(answer
 '((conclude
    (distinct-symbols X Y)
    from
    (differ X Y)
    (symbol X)
    (symbol Y))
   (some 1 (distinct-symbols 'A B)))
 fundamental-assumptions)

yields ((((B . X~247)) (symbol X~247) (differ 'A X~247))), but

(answer
 '((conclude
    (distinct-symbols X Y)
    from
    (differ X Y)
    (symbol X)
    (symbol Y))
   (some 1 (distinct-symbols 'A 'B)))
 fundamental-assumptions)

gives ((())), and

(answer
 '((conclude
    (distinct-symbols X Y)
    from
    (differ X Y)
    (symbol X)
    (symbol Y))
   (some 1 (distinct-symbols A A)))
 fundamental-assumptions)

produces ().

We have only defined two types of constraints, but it may turn out that we’re going to need some more. We are prepared for this situation, though, because our framework is flexible enough to allow us to define new constraints (even though they are extra-linguistic from the point of view of our detective).

Running the evaluator backwards — the second attempt

Now that our language is advanced enough to express various constraints, translating the meta-circular evaluator to our detective language should be a relatively straightforward task. Let’s begin by translating the run function. Our counterpart will be called reduces:

(conclude
  (reduces (('define name value) . rest)
           env result)
  from
  (reduces rest
           ((name . value) . env)
           result))

(conclude
  (reduces (last-expression) env result)
  from
  (evaluates last-expression env result))

The core of the evaluator is the evaluates function, which corresponds to the value function. We need to handle all the types of expressions that we might have to deal with, i.e. quote, if, function, variables, function application and literals.

The case of quote is very simple — we simply give back the quoted term:

(conclude
  (evaluates ('quote literal) env literal))

The case of if is slightly more complicated: we need to evaluate the test of the expression; if it evaluates to #false, we produce the value of the “else” branch, and otherwise — the value of the “then” branch:

(conclude
  (evaluates ('if test then else)
             env result)
  from
  (evaluates test env #false)
  (evaluates else env result))

(conclude
  (evaluates ('if test then else)
             env result)
  from
  (evaluates test env test-result)
  (differ test-result #false)
  (evaluates then env result))

Functions are self-evaluating:

(conclude
  (evaluates ('function args . body) env
             ('function args . body)))
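Even with just the rules so far, we can already run the evaluator backwards. Assuming the conclude forms above are passed to answer together with fundamental-assumptions, consider a query such as the following (whose printed output I am only sketching, since the exact shape and transient names vary):

(answer
 '((conclude
    (evaluates ('quote literal) env literal))
   (some 1 (evaluates X () (1 2))))
 fundamental-assumptions)

might produce something like ((((X . ('quote (1 2)))))), i.e. asked which expression evaluates to (1 2) in the empty environment, the detective answers with the quotation of (1 2).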
In the case of variables, we need to look up the value of the variable in the environment:

(conclude
  (evaluates key ((key . value) . _) value)
  from
  (symbol key))

(conclude
  (evaluates key ((other-key . _) . env) value)
  from
  (symbol key)
  (symbol other-key)
  (differ key other-key)
  (evaluates key env value))

The case of function application is a bit tricky: we need to evaluate the operator and the operands recursively, and then apply the resulting function to the resulting arguments. But we must make sure that the operator is neither quote nor function nor if:

(conclude
  (evaluates (operator . operands) env result)
  from
  (differ operator 'quote)
  (differ operator 'function)
  (differ operator 'if)
  (evaluates operator env function)
  (evaluate operands env arguments)
  (application function arguments env result))

where evaluate produces a list of evaluated arguments

(conclude
  (evaluate (first . rest) env (first* . rest*))
  from
  (evaluates first env first*)
  (evaluate rest env rest*))

(conclude
  (evaluate () env ()))

and application extends the environment with the appropriate values and reduces the given sub-program:

(conclude
  (application ('function (arg . args) . body)
               (val . vals) env result)
  from
  (application ('function args . body)
               vals
               ((arg . val) . env)
               result))

(conclude
  (application ('function () . body) ()
               env result)
  from
  (reduces body env result))

(conclude
  (application ('function last-arg . body)
               vals env result)
  from
  (symbol last-arg)
  (reduces body
           ((last-arg . vals) . env)
           result))

Literals (like numbers) are self-evaluating, so they can be handled using the following rule:

(conclude
  (evaluates value env value)
  from
  (literal value))

which of course adds another type of constraint — one that holds if a given object is a literal value.

So that’s it. This is what “the most impressive code I’ve seen” looks like. To be more specific, this is my attempt at rewriting that code.

Why’s that important?

While the above code doesn’t itself offer too many capabilities, it could be extended (for example, with operations like cons, head and tail, or some operations on numbers) in ways that would allow us to generate programs which give some particular results, or have some particular properties. So, instead of using computers to execute our programs, we can use them to generate programs that have some desired properties.

I learned about this technique from Dan Friedman and Will Byrd, who presented it in 2012 during a Clojure/conj conference. The recorded talk is available on YouTube (you should watch at least the first 20 minutes to see the potential of the idea). Will Byrd has also applied this technique to develop an experimental editor that is able to synthesize programs from hints that programmers give to it, which he explained in another recorded talk.

The Summary

The philosopher Ludwig Wittgenstein once said:

the limits of my language mean the limits of my world

What I like about the example that I have presented above is that it shows the “perennial value” of Lisp: it makes it very easy to design new languages (like “the detective language”), and a new language is an amplifier for the mind — one that allows you to shift the limits of your world.

What dead or fictional famous person would be a great Quora contributor? Whether it be a president or character from film/literature, who could contribute great content? Why would that person be awesome on Quora?

DFW. This guy, not the airport.

It's wild that many people in Silicon Valley don't recognize those initials. The other day I had to explain Infinite Jest to one of the smartest and most influential people I know. It was the first time he'd heard of the book.

We could've used this literary titan, a self-deprecating genius who could tackle any subject, and he could've used us.

David Foster Wallace (author):

Spoke our language. The Internet sounds like David Foster Wallace.[1]

Delighted in asking and answering questions. Like most of us, DFW had a day job — fiction, trying to tackle The Pale King — but was also a master essayist. Nonfiction was his plaything.[2] Quora could've been his playground.

Snarked. The publication: Gourmet, a prestigious food and wine magazine. His assignment: the annual Maine Lobster Festival. His product: 7,000 words on the ethics of boiling a creature alive in order to enhance the consumer's pleasure, complete with a discussion of the crustacean's sensory system.[3] Readers and advertisers were undoubtedly surprised by the unusual angle.

Boasted expertise in a range of heavy-hitting topics.[4] He was so much more than Literature and Writing.

Hailed from the Midwest. As in, outside Silicon Valley.

Out-grammared everyone.[5]

Shepherded the grammatically feeble. That is, he taught lower-level English classes.[6] Quora could've used his help editing.

Knew the ins and outs of footnotes.

Spoke eloquently and candidly of Depression and Loneliness.[7] I suspect many among us visit these topics during dark times, hoping to find words from another soul in the trenches.[8]

Was an addict. A tasteless point to make, perhaps, but upvotes and follows and edits can be habit-forming.

The next best thing would be Quora hosting DFW's friend and fellow writer, Jonathan Franzen. Getting him on here would be a victory greater than getting Obama on Reddit (August 2012), for Franzen is not a friend of the internet: "It's doubtful," he said, "that anyone with an internet connection at his workplace is writing good fiction."[9]

_________

[1] To the Internet's detriment, argues Maud Newton in The NYT Magazine (2011):

[In an essay] Wallace speaks of “the whole cynical postmodern deal” and “the whole mainstream celebrity culture,” and concludes that “the whole thing sucks.” ... “whole” appears 20 times in the essay, so frequently that it begins to seem not just sloppy and imprecise but argumentatively, even aggressively, disingenuous. At their worst these verbal tics make it impossible to evaluate his analysis; I’m constantly wishing he would either choose a more straightforward way to limit his contentions or fully commit to one of them.

Of course, Wallace’s slangy approachability was part of his appeal, and these quirks are more than compensated for by his roving intelligence and the tireless force of his writing. The trouble is that his style is also, as Dyer says, “catching, highly infectious.”  And if, even from Wallace, the aw-shucks, I-could-be-wrong-here, I’m-just-a-supersincere-regular-guy-who-happens-to-have-written-a-book-on-infinity approach grates, it is vastly more exasperating in the hands of lesser thinkers.
In the Internet era, Wallace’s moves have been adopted and further slackerized by a legion of opinion-mongers who not only lack his quick mind but seem not to have mastered the idea that to make an argument, you must, amid all the tap-dancing and hedging, actually lodge an argument.

Visit some blogs — personal blogs, academic blogs, blogs associated with some of our most esteemed periodicals — to see these tendencies writ large.

full article: http://www.nytimes.com/2011/08/21/magazine/another-thing-to-sort-of-pin-on-david-foster-wallace.html?pagewanted=all

[2] DFW in an interview with Tom Scocca (1998):

I'll be honest. I think of myself as a fiction writer. I'm real interested in fiction, and all elements of fiction. Fiction's more important to me. So I'm also I think more scared and tense about fiction, more worried about my stuff, more worried about whether I'm any good or not, or I'm on the wrong track or not.

Whereas the thing that was fun about a lot of the nonfiction is, you know, it's not that I didn't care, but it was just mostly like, yeah, I'll try this. I'm not an expert at it. I don't pretend to be. It's not particularly important to me whether the magazine, you know, even takes the thing I do or not. And so it was just more, I guess the nonfiction seems a lot more like play.

full transcript: http://www.slate.com/content/slate/blogs/scocca/2010/11/22/i_m_not_a_journalist_and_i_don_t_pretend_to_be_one_david_foster_wallace_on_nonfiction_1998_part_1.html

[3] From DFW's "Consider the Lobster," Gourmet magazine (2004):

Up until sometime in the 1800s, [lobster] was literally low-class food, eaten only by the poor and institutionalized. Even in the harsh penal environment of early America, some colonies had laws against feeding lobsters to inmates more than once a week because it was thought to be cruel and unusual, like making people eat rats. One reason for their low status was how plentiful lobsters were in old New England. “Unbelievable abundance” is how one source describes the situation, including accounts of Plymouth pilgrims wading out and capturing all they wanted by hand, and of early Boston’s seashore being littered with lobsters after hard storms—these latter were treated as a smelly nuisance and ground up for fertilizer. There is also the fact that premodern lobster was often cooked dead and then preserved, usually packed in salt or crude hermetic containers. Maine’s earliest lobster industry was based around a dozen such seaside canneries in the 1840s, from which lobster was shipped as far away as California, in demand only because it was cheap and high in protein, basically chewable fuel.

Now, of course, lobster is posh, a delicacy, only a step or two down from caviar. The meat is richer and more substantial than most fish, its taste subtle compared to the marine-gaminess of mussels and clams. In the U.S. pop-food imagination, lobster is now the seafood analog to steak, with which it’s so often twinned as Surf ’n’ Turf on the really expensive part of the chain steak house menu.

full text: http://www.gourmet.com/magazine/2000s/2004/08/consider_the_lobster

[4] A list of Quora topics David Foster Wallace would've owned[a]:

Philosophy. His professors at Amherst College considered him a "rare philosophical talent, an exceptional student who combined raw analytical horsepower with an indefatigable work ethic. He was thought, by himself and by others, to be headed toward a career as a professor of philosophy."[b]

Mathematics and Logic (philosophy).
He focused on both in college and, much later, published "Everything and More," a nonfiction book that tells the story of ∞ in a breathless survey of the history of math. The structure of Infinite Jest resembles a fractal, he once remarked.[c]

Politics. He spent a week traveling with McCain's primary campaign during the 2000 U.S. Presidential Election.[d]

9/11 Attacks.[e]

Religion. Allegedly he went to church every day, but he didn't talk about it much.[f] Maybe he would've written some anonymous answers.

Mental Illness. DFW's own demons are well-documented.

Addictions. He hung out in halfway houses for sport.[g] Addiction is a recurring theme in Infinite Jest.[h]

Teaching. He had some curious classroom practices: cheap, mass-market paperbacks were the only required reading in his English 102 class at Pomona College. The likes of Jackie Collins and Stephen King, he argued, would be "harder than more conventionally 'literary' works to unpack and read critically."[i]

Tennis. He had played the sport since childhood[j] and covered matches on assignment for the likes of Esquire.[k]

Dogs (pets).[l] He had two rescues, and there was talk of him opening a shelter.

Popular Culture.[m]

Success.[n]

Dating and Relationships. He died a married man but before that had a reputation as a ladykiller: "He once complained to Franzen that his destiny in life seemed to be to 'put my penis in as many vaginas as possible.'"[o]

Dentistry. The first draft of Infinite Jest had a ton of tooth trivia that had to be extracted. DFW's editor "pointed out that the stereo chemistry of the bicuspid root was probably not of compelling interest to most readers."[p]

Poetry, heh.[q]

[5] From DFW's "Tense Present," Harper's Magazine (2001):

The term I was raised with is SNOOT. I submit that we SNOOTs are just about the last remaining kind of truly elitist nerd. SNOOTs' attitudes about contemporary usage resemble religious/political conservatives' attitudes about contemporary culture: We combine a missionary zeal and a near-neural faith in our beliefs' importance with a curmudgeonly hell-in-a-handbasket despair at the way English is routinely manhandled and corrupted by supposedly educated people. The Evil is all around us: boners and clunkers and solecistic howlers and bursts of voguish linguistic methane that make any SNOOT's cheek twitch and forehead darken. A fellow SNOOT I know likes to say that listening to most people's English feels like watching somebody use a Stradivarius to pound nails. We are the Few, the Proud, the Appalled at Everyone Else. [a]

full text: http://instruct.westvalley.edu/lafave/DFW_present_tense.html

image via http://www.utexas.edu/opa/blogs/culturalcompass/2011/04/26/in-the-galleries-david-foster-wallaces-affinity-for-grammar-and-usage/

[6] Former student Sue Dickman, from a collection of DFW tributes on McSweeney's:

I used to confuse 'further' and 'farther,' and, apparently, I did it quite often. In one of my stories, I’d confused them yet again, and in the margins, he’d written, simply, 'I hate you.' I’ve never confused them since. He once left me a note, postponing a meeting, excusing himself by saying, 'I’m so hungry I’m going to fall over.' While I was irritated that he wasn’t there, I immediately adopted that sentence and have been saying it ever since.

more tributes: http://www.mcsweeneys.net/pages/memories-of-david-foster-wallace

[7] From DFW's "The Planet Trillaphon as It Stands in Relation to the Bad Thing," The Amherst Review (1984):

To me it's like being completely, totally, utterly sick.
I will try to explain what I mean. Imagine feeling really sick to your stomach. Almost everyone has felt really sick to his or her stomach, so everyone knows what it's like: it's less than fun. OK. OK. But that feeling is localized: it's more or less just your stomach. Imagine your whole body being sick like that: your feet, the big muscles in your legs, your collarbone, your head, your hair, everything, all just as sick as a fluey stomach. Then, if you can imagine that, please imagine it even more spread out and total. Imagine that every cell in your body, every single cell in your body is as sick as that nauseated stomach. Not just your own cells, even, but the e.coli and lactobacilli in you, too — the mitochondria, basal bodies, all sick and boiling and hot like maggots in your neck, your brain, all over, everywhere, in everything. All just sick as hell. Now imagine that every single atom in every single cell in your body is sick like that — sick, intolerably sick. And every proton and neutron in every atom, swollen and throbbing, off-color, sick, with just no chance of throwing up to relieve the feeling. Every electron is sick, here, twirling off balance and all erratic in these funhouse orbitals that are just thick and swirling with mottled yellow and purple poison gases, everything off balance and woozy. Quarks and neutrinos out of their minds and bouncing sick all over the place, bouncing like crazy. Just imagine that, a sickness spread utterly through every bit of you, even the bits of the bits. So that your very ... very essence is characterized by nothing other than the feature of sickness; you and the sickness are, as they say, "one."

That's kind of what the Bad Thing is like at its roots. Everything in you is sick and grotesque. And since your only acquaintance with the whole world is through parts of you — like your sense organs and your mind, etc. — and since these parts are sick as hell, the whole world as you perceive it and know it and are in it comes at you through this filter of bad sickness and becomes bad. As everything becomes bad in you, all the good goes out of the world like air out of a big broken balloon. There's nothing in this world you know but horrible rotten smells, sad and grotesque and lurid pastel sights, raucous or deadly-sad sounds. Intolerable open-ended situations lined on a continuum with just no end at all.

full text: https://docs.google.com/viewer?a=v&q=cache:YvYoUJ2gLTEJ:quomodocumque.files.wordpress.com/2008/09/wallace-amherst_review-the_planet.pdf+&hl=en&gl=us&pid=bl&srcid=ADGEESiH3jJXi7bjSM_2L8PhRZTooLn3UiLfReikgADFO2Z6h4AUMZqGQRgYKlN4E65dja7bD8qCeHaMf8RBrx5thDHTO5lJdSZlOYszttp49YEdb6QqMpWpgBhA5KfSKnQdlD6MWRgv&sig=AHIEtbRyCdoSidF85-Zvw5HlehAtX6XcVw

[8] From Infinite Jest (1996 book):

The so-called ‘psychotically depressed’ person who tries to kill herself doesn’t do so out of quote ‘hopelessness’ or any abstract conviction that life’s assets and debits do not square. And surely not because death seems suddenly appealing. The person in whom Its invisible agony reaches a certain unendurable level will kill herself the same way a trapped person will eventually jump from the window of a burning high-rise. Make no mistake about people who leap from burning windows. Their terror of falling from a great height is still just as great as it would be for you or me standing speculatively at the same window just checking out the view; i.e. the fear of falling remains a constant.
The variable here is the other terror, the fire’s flames: when the flames get close enough, falling to death becomes the slightly less terrible of two terrors. It’s not desiring the fall; it’s terror of the flames. And yet nobody down on the sidewalk, looking up and yelling ‘Don’t!’ and ‘Hang on!’, can understand the jump. Not really. You’d have to have personally been trapped and felt flames to really understand a terror way beyond falling.

[9] source: http://www.guardian.co.uk/books/2010/feb/20/ten-rules-for-writing-fiction-part-one

_________

[a] I'm sure I missed some. Chime in: If you could ask DFW one question, what would it be?

[b] source: "Philosophical Sweep: To understand the fiction of David Foster Wallace, it helps to have a little Wittgenstein." http://www.slate.com/articles/arts/culturebox/2010/12/philosophical_sweep.html

[c] DFW in an interview with "Bookworm" host Michael Silverblatt, 1996:

It's actually structured like something called a Sierpinski Gasket, which is a very primitive kind of pyramidical fractal, although what was structured as a Sierpinski Gasket was the first — was the draft that I delivered to Michael in '94, and it went through some I think 'mercy cuts', so it's probably kind of a lopsided Sierpinski Gasket now.

[d] From DFW's "The Weasel, Twelve Monkeys and The Shrub," Rolling Stone (magazine) (2000):

There's another thing John McCain always says. He makes sure he concludes every speech and [town hall meeting] with it, so the buses' press hear it about 100 times this week. He always pauses a second for effect and then says: "I'm going to tell you something. I may have said some things here today that maybe you don't agree with, and I might have said some things you hopefully agree with. But I will always. Tell you. The truth." This is McCain's closer, his last big reverb on the six-string as it were. And the frenzied standing-O it always gets from his audience is something to see. But you have to wonder. Why do these crowds from Detroit to Charleston cheer so wildly at a simple promise not to lie?

[e] From DFW's "9/11: The View From the Midwest," Rolling Stone (2001):

Everybody has flags out. Homes, businesses. It's odd: You never see anybody putting out a flag, but by Wednesday morning there they all are. Big flags, small flags, regular flag-size flags. A lot of home-owners here have those special angled flag-holders by their front door, the kind whose brace takes four Phillips screws. And thousands of those little hand-held flags-on-a-stick you normally see at parades – some yards have dozens all over as if they'd somehow sprouted overnight. Rural-road people attach the little flags to their mailboxes out by the street. Some cars have them wedged in their grille or duct-taped to the antenna. Some upscale people have actual poles; their flags are at half-mast. More than a few large homes around Franklin Park or out on the east side even have enormous multistory flags hanging gonfalon-style down over their facades. It's a total mystery where people get flags this big or how they got them up there. ... The Yellow Pages have nothing under Flag.

[f] source: http://www.mbird.com/2012/08/david-foster-wallace-went-to-church-constantly/

image via http://www.amazon.com/Fate-Time-Language-Essay-Free/dp/0231151578

[g] DFW in a conversation with Valerie Stivers (c. 1996):

I would go to halfway houses and just sit there. I lurked a lot. Nice thing about halfway houses is they are real run-down and real sloppy and you can just sit around.
And the more you sit around looking uncomfortable and out of place, the more it looks like you belong there. Some of the people knew this [breaking and entering] stuff very well and they loved to talk about it. And nobody is as talkative as a drug addict who just had his drugs taken away. They are eager to tell you their life [stories].

full conversation: http://www.stim.com/Stim-x/0596May/Verbal/dfwread.html

[h] One of the best parts of Infinite Jest is a list of "exotic new facts" picked up in these halfway houses:

That everybody is identical in their secret unspoken belief that deep down they are different from everyone else. That this isn’t necessarily perverse.

That different people have radically different ideas of basic personal hygiene.

That most Substance-addicted people are also addicted to thinking, meaning they have a compulsive and unhealthy relationship with their own thinking.

That certain persons will simply not like you no matter what you do.

That no matter how smart you thought you were, you are actually way less smart than that.

That pretty much everybody masturbates.

That it is statistically easier for low-IQ people to kick an addiction than it is for high-IQ people.

That boring activities become, perversely, much less boring if you concentrate intently on them.

[i] DFW's syllabus: http://www.hrc.utexas.edu/press/releases/2010/dfw/teaching/#syllabus

[j] How good was David Foster Wallace (author) at tennis?

[k] From DFW's "Federer as Religious Experience," The New York Times, 2006:

Beauty is not the goal of competitive sports, but high-level sports are a prime venue for the expression of human beauty. The relation is roughly that of courage to war.

The human beauty we’re talking about here is beauty of a particular type; it might be called kinetic beauty. Its power and appeal are universal. It has nothing to do with sex or cultural norms. What it seems to have to do with, really, is human beings’ reconciliation with the fact of having a body.

Of course, in men’s sports no one ever talks about beauty or grace or the body. Men may profess their “love” of sports, but that love must always be cast and enacted in the symbology of war: elimination vs. advance, hierarchy of rank and standing, obsessive statistics, technical analysis, tribal and/or nationalist fervor, uniforms, mass noise, banners, chest-thumping, face-painting, etc. For reasons that are not well understood, war’s codes are safer for most of us than love’s.

full text: http://www.nytimes.com/2006/08/20/sports/playmagazine/20federer.html

From DFW's "The String Theory," Esquire, 1996:

Television tends to level everybody out and make everyone seem kind of blandly good-looking, but at Montreal it turns out that a lot of the pros and stars are interesting- or even downright funny-looking. Jim Courier, former number one but now waning and seeded tenth here, looks like Howdy Doody in a hat on TV but here turns out to be a very big boy — the “Guide Média” lists him at 175 pounds, but he’s way more than that, with big smooth muscles and the gait and expression of a Mafia enforcer. Michael Chang, twenty-three and number five in the world, sort of looks like two different people stitched crudely together: a normal upper body perched atop hugely muscular and totally hairless legs. He has a mushroom-shaped head, inky-black hair, and an expression of deep and intractable unhappiness, as unhappy a face as I’ve seen outside a graduate creative-writing program.
Pete Sampras (tennis player) is mostly teeth and eyebrows in person and has unbelievably hairy legs and forearms — hair in the sort of abundance that allows me confidently to bet that he has hair on his back and is thus at least not 100 percent blessed and graced by the universe. Goran Ivanisevic is large and tan and surprisingly good-looking, at least for a Croat; I always imagine Croats looking ravaged and emaciated, like somebody out of a Munch lithograph — except for an incongruous and wholly absurd bowl haircut that makes him look like somebody in a Beatles tribute band. It’s Ivanisevic who will beat Joyce in three sets in the main draw’s second round. Czech former top-ten Petr Korda is another classic-looking mismatch: At six three and 160, he has the body of an upright greyhound and the face of — eerily, uncannily — a freshly hatched chicken (plus soulless eyes that reflect no light and seem to see only in the way that fishes’ and birds’ eyes see).

full text: http://www.esquire.com/features/sports/the-string-theory-0796#ixzz28a80GWUk

[l] DFW in an interview with The Believer, 2003:

I’m still not sure I’ve got much to relate. I know I never work in whatever gets called an office, e.g., school office I use only for meeting students and storing books I know I’m not going to read anytime soon. I know I used to work mostly in restaurants, which chewing tobacco rendered impractical in ways that are not hard to imagine. Then for a while I worked mostly in libraries. (By “working” I mean doing the first few drafts and revisions, which I do longhand. I’ve always typed at home, and I don’t consider typing working, really.) Anyway, but then I started to have dogs. If you live by yourself and have dogs, things get strange. I know I’m not the only person who projects skewed parental neuroses onto his pets or companion-animals or whatever. But I have it pretty bad; it’s a source of some amusement to friends. First, I began to get this strong feeling that it was traumatic for them to be left alone more than a couple hours. This is not quite as psycho as it may seem, because most of the dogs I’ve ended up with have had shall we say hard puppyhoods, including one past owner who went to jail… but that’s neither here nor there. The point is that I got reluctant to leave them alone for very long, and then after a while I got so I actually needed one or more dogs around in order to be comfortable enough to feel like working. And all that put a crimp in outside-the-home writing, a change that in retrospect was not all that good for me because (a) I have agoraphobic tendencies anyway, and (b) home is obviously full of all kinds of distractions that library carrels aren’t. The point being that I mostly work at home now, although I know I’d work better, faster, more concentratedly if I went someplace else. If work is going shitty, I try to make sure that at least a couple hours in the morning are carved out for this disciplined thing called Work. If it’s going well, I often work in the p.m. too, although of course if it’s going well it doesn’t feel disciplined or like uppercase Work because it’s what I want to be doing anyway. What often happens is that when work goes well all my routines and disciplines go out the window simply because I don’t need them, and then when it starts not going well I flounder around trying to reconstruct disciplines I can enforce and habits I can stick to.
Which is part of what I meant by saying that my way of doing it seems chaotic, at least compared to the writing processes of other people I know about (which now includes you).[I]

full interview: http://www.believermag.com/issues/200311/?read=interview_wallace

[m] From an interview with DFW biographer D. T. Max in the Christian Science Monitor (2012):

Not only was he acquainted with pop culture, he thought he was addicted to it. He said his primary addiction wasn't to marijuana or alcohol, it was to television. David, with his anxiety and depression, found television fundamentally soothing: He was addicted to narrative. He said the narratives were too easy, too smooth, the endings too pat. But his sister says in the book that she didn't know anyone who had a need for TV like David had.

full interview: http://www.csmonitor.com/Books/chapter-and-verse/2012/0927/Biographer-D.T.-Max-getting-inside-David-Foster-Wallace-s-head

[n] From DFW's commencement address at Kenyon College (2005):

In the day-to-day trenches of adult life, there is actually no such thing as atheism. There is no such thing as not worshipping. Everybody worships. The only choice we get is what to worship. ...
Worship power, you will end up feeling weak and afraid, and you will need ever more power over others to numb you to your own fear. Worship your intellect, being seen as smart, you will end up feeling stupid, a fraud, always on the verge of being found out. But the insidious thing about these forms of worship is not that they're evil or sinful, it's that they're unconscious. They are default settings.
They're the kind of worship you just gradually slip into, day after day, getting more and more selective about what you see and how you measure value without ever being fully aware that that's what you're doing.
And the so-called real world will not discourage you from operating on your default settings, because the so-called real world of men and money and power hums merrily along in a pool of fear and anger and frustration and craving and worship of self. Our own present culture has harnessed these forces in ways that have yielded extraordinary wealth and comfort and personal freedom. The freedom all to be lords of our tiny skull-sized kingdoms, alone at the centre of all creation. This kind of freedom has much to recommend it. But of course there are all different kinds of freedom, and the kind that is most precious you will not hear much talk about in the great outside world of wanting and achieving. ... The really important kind of freedom involves attention and awareness and discipline, and being able truly to care about other people and to sacrifice for them over and over in myriad petty, unsexy ways every day.
That is real freedom. That is being educated, and understanding how to think. The alternative is unconsciousness, the default setting, the rat race, the constant gnawing sense of having had, and lost, some infinite thing.

full transcript: http://moreintelligentlife.com/story/david-foster-wallace-in-his-own-words

[o] source: "Every Love Story is a Ghost Story: A Life of David Foster Wallace by DT Max." http://www.amazon.com/Every-Love-Story-Is-Ghost/dp/0670025925

[p] Another piece from his 1996 conversation with Valerie Stivers: http://www.stim.com/Stim-x/0596May/Verbal/dfwread.html

[q] An early poem:

My mother works so hard so hard and for bread. She needs some lard. She bakes the bread. And makes the bed.
And when she’s threw she feels she’s dayd.

from a permanent collection of DFW's books and papers, housed at the University of Texas’ Harry Ransom Center, scanned with permission by a blogger: http://www.writebynight.net/writings-from-a-past-life/wfpl-david-foster-wallace/

[I] Yeah, I realize this is only tangentially related to dogs. If you're reading this deeply, though, I suspect you don't mind.

Should we get rid of the letter C?

Yes.

The.Whole.Bloody.Grapheme.System.Needs.A.Bloody.Reform

In Hindi too, “box” is baksa; we did not make a whole new alphabet for it. Why the Romans did, I ain’t got an explanation. Okay John Katt, I’m copying and pasting your most famous answer here.

The English spelling system can and should be reformed because it is flawed (half its lexicon is), and what is flawed is costly: it wastes time, money, and effort that could go to other, more crucial subjects. Who pays? Children, parents, teachers, and society do. We all do. You do. To preserve the history of words? 250 years of insanity must stop. Oil lamps had their time. Paradigms have shifted in 250 years. A lingua franca must be better. Is it the spelling system that is rotten, or the learners? The teachers? Hundreds of thousands of words in the English lexicon ARE badly spelled. It is not the learners’ fault or the teachers’ fault. It is the system’s faultS.

Words hurt, and in more ways than … one! Where is the /w/ sound in one? Why an “o” in “words”? Hundreds of thousands of English words are misspelled like so in the lexicon! Is the history of words more important than the health of people?

A UNIQUE, FAIR, WISE, and EFFICIENT REFORM

Learning from research and past reforms and using new technology, this reform de-risks everything, maximizes the opportunity, and minimizes ALL issues. First, current users will NOT need to learn a new spelling system, because it will be phased in over 12 years in schools, starting at age 6. (This is unlike many other spelling reforms. That is a game-changer.) Second, it uses a set of diaphonemes (the most popular phonemes of most English dialects) to prevent all polemics. (No one has ever thought about this before.) As a result, it allows for the adoption of an extremely robust, systematic, and phonemic system that will be the envy of all languages. It is a unique, fair, wise, and efficient proposal. MANY paradigms have shifted in 250 years. The English spelling system can and should be re-engineered. English will finally be a worthy lingua franca.

MAIN SINE QUA NON CONDITIONS for a reform

That no current users be required to learn the new spellings. It will not be necessary.
That the new system be introduced in whole to new students in level 1 of primary classes, called cohort 1 (C1), and phased in, one year at a time.
That C1 students and all future cohorts be given bilingual, bicodal courses in the old system. They become a transitional cohort.
That other (English 1.0) students get some instruction with the new system, increasingly so for cohorts closer to C1.
That diaphonemes (an average of the phonemic variations of the main dialects) be used. It is an algorithm. It cannot be fairer.
That an extremely systematic and phonemic scheme with virtually no exceptions be used. No compromise.
That local dialectal speech be maintained. C1 will be bicodal.
That computer technology be used to instantly transcode material. Writers will see their work published in two varieties, or as desired by the customer, using a free transcoding program.
That no loss of jobs take place. Translators/interpreters will still be needed for the current population and C1.

This is the short version. Please do read the full version below. It not only gives the reasons why this should happen, but explains how it should happen (and why this way) in detail. All the reasons I know of not to do it are debunked. I have been studying this issue for 5 years.
I have heard them all.

Why and how?

The “how” part offers credible solutions for dramatically reducing the incredibly high costs and problems that children, parents, and teachers must face, and it answers all of the arguments used by skeptics or vested parties to prevent a reform. ALL! For one thing, paradigms have shifted since Samuel Johnson (250 years ago) “dictated” how he thought we (kids and adults) should decode/read and spell/pronounce words. Since Johnson’s time, and since some of these arguments were made, a lot has changed. For another, some reformists expected current populations to change how they spell words and learn how to read in a new way. This was a major error. This issue demanded a new approach that sought outside-of-the-box solutions. There is nothing more idiotic and insane than to keep doing something that has failed over and over again and expect different outcomes. Times have changed. Lastly, English has become a lingua franca, spoken by 1.5 billion people in one way or another. This is the short answer. LOL

(I would like to recognize and thank Dana Smith and Roman Huczok for their excellent comments and suggestions, which helped make this answer better.)

WHY?

Consider that the most used language in the world (spoken by billions of people) has a spelling system that is an absolute embarrassment for a language and, a fortiori, for a lingua franca, a world language, a language that should be the epitome of all languages. It isn’t. It has hundreds of thousands of words misspelled in its lexicon (see below for evidence), which might explain why you must use a spell checker, or can’t pronounce words, or could not decode words when you learned it, if you learned it, depending on your language (see the end for an explanation). With the rapid spread of computers and spell checkers, it is my contention that the issue is no longer so much a spelling issue as a decoding/reading and pronouncing issue. This does not take away from the idea at all, though, because the spelling issue was never considered by many to be the major issue.

Did you know that it takes 3 more years for the average learner to learn just a few thousand words, whereas any Finnish or Spanish kid after Grade 1 can decode ANY word in their language’s lexicon in just a few months? (Maybe I am exaggerating a bit, but the differences are startling.) Think teaching the extra time is free? Sorry, taxpayers! Think it is easy to learn? Kids labelled reading-disabled thank you, and so do the 1/2 of Britons who apparently cannot spell. Illiteracy rates are unusually high in Commonwealth countries (unless you throw lots of money at mitigating the problem, as in Canada). But we are told that it would be impossible to fix (did they not say this about climate change or women's equality?), while many professors of linguistics, like Dr. Yule and Dr. Betts now, and Shaw, Roosevelt, Carnegie, Webster, and Twain before them, think that where there is a will there is a way. Is it tutoring agencies, publishers of English-learning material, even teachers and psycholinguists, who are claiming it cannot be done? The status quo is lucrative. Who is right? Here is the case.

Exhibit A: This is how it could look. [image not preserved]
Exhibit B: Now compare that with something that is familiar. [image not preserved]
Exhibit C: [image not preserved]
Exhibit D: [image not preserved]
Which is a mess to deal with? And which is an intelligent, engineered system? ANSWER: d, b
Exhibit E: 10.1.1.92.1830.pdf

But why should it be done, and how could it be done?

1. WHY it should be done

Finnish kids start school at age 7; most English-speaking kids start at age 5.5.
Given that Finnish is a highly phonemic spelling system, a similar system would allow Commonwealth countries to reduce educational budgets by removing the need for many teachers, including many literacy teachers. This could be billions in the US. Alternatively, countries could use these years to teach more crucial matters and include more subject areas in the curriculum. Ethics 101? Financial literacy 102? Programming 103?
Billions of foreign learners will finally be able to pronounce words as they are written.
There will be less of a chance for Chinese to dethrone English as a lingua franca if English were easier to learn.
A country with a better educated population and higher literacy levels might have lower crime rates and higher employment.
Happier children will want to read.

Some linguists claim that it is not really worth it. But their jobs depend largely on this mess (especially psycholinguists'). Tutors and tutoring agencies depend on the mess too. Many benefit from the chaos. They get grant money to study our kids, who become their little rats. Are the kids really disabled, or is it the system that is? Which is it?

Exhibit F: (http://www.elemedu.upatras.gr/en...) [chart not preserved]

So, is it worth it when most kids in English-speaking countries can decode words at a 35% level and most others do TWICE as well? This study was done with normal kids. This was a major study, too. Are English-speaking kids inherently dumber than all the other kids? Are English-speaking teachers inherently more incompetent than all others? Or is it that the spelling system is inherently dumber, disabled, disabling, such that if it were a car, it would be recalled in a second? You bet.

Exhibit G: If we use Masha Bell’s research on just 7,000 words and extrapolate to the whole English lexicon, it is safe to say that there are hundreds of thousands of words that are “misspelled”. /ə/, AKA the schwa, is one of the most frequent phonemes/sounds in English, and it can be spelled 13 ways. How do you expect people to learn this? Memorize the pronunciation or the spelling of 1 million words? Most polysyllabic words (mostly not part of the 7,000 words) have at least one schwa and often many.

Exhibit H: Here is that diagram again. [diagram not preserved] It shows that there are 13 ways of spelling ONE phoneme (the schwa), 12 ways to spell the /ei/ sound, and 11 ways the /ɛ/ sound. Hey! I see a pattern! THAT is easy to remember! (Find the list of spellings in the addendum.) Hundreds of thousands of misspelled words! NOT! How insane is that? No wonder native speakers cannot spell/decode words and foreign learners cannot pronounce words (or spell them, if they do not speak a European language). As for teachers, they do the best they can with a messy tool. That it costs a lot of money for taxpayers and parents does not seem to be an issue, but many people don’t know how expensive it is.

You want more evidence?

Exhibit I: http://files.eric.ed.gov/fulltex... (This booklet focuses on the reading literacy test scores of students in the grade levels where most 9- and 14-year-olds were to be found in 32 national systems of education. Data were collected from 9,073 schools, 10,518 teachers, and 210,059 students. In 1990-1991, thirty-two school systems were involved in the IEA Reading Literacy Study. Participating in the study were: Belgium (French), Denmark, Finland, Greece, Iceland, Indonesia, Ireland, Netherlands, New Zealand, Nigeria, Norway, Philippines, Portugal, Singapore, Spain, Sweden, Switzerland, Thailand, Trinidad & Tobago, United States, and Venezuela.)

There is more. From Wikipedia.
Exhibit J: “Alphabetic writing systems vary significantly in the depth of their orthography. English and French are considered deep orthographies in comparison to Spanish and Italian, which are shallow orthographies. A deep orthography like English has letters or letter combinations that do not reliably map to specific phonemes/sound units, and so are ambiguous in terms of the sounds that they represent, whereas a transparent or shallow orthography has symbols that (more) uniquely map to sounds, ideally in a one-to-one correspondence or at least with limited or clearly signified (as with accent marks or other distinguishing features) variation. Literacy studies have shown that even for children without reading difficulties like dyslexia, a more transparent orthography is learned more quickly and more easily; this is true across language systems (syllabic, alphabetic, and logographic), and between shallow and deep alphabetic languages. [20]” (Wikipedia)

This person explains it well:

Exhibit K: From the Children of the Code website. [content not preserved]

Exhibit L: Sorry, Chomsky: English spelling is hardly “close to optimal”

I rest my case.

2. HOW?

Many languages have had reforms (check it out) and many were successful. The better ones did not expect current users to learn the new system. Moreover, today we have computers, smartphones, AI, … The paradigm has shifted. Are the naysayers living in the Dark Ages? Why are they so reticent? No one who loves to read will be bothered. Why would they want to subject kids to mental torture when the MAJORITY of kids struggle, the MAJORITY of citizens struggle, and the MAJORITY of foreigners struggle? While the naysayers love exceptions, the evidence is there. No matter how you look at it, it is a mess. (This neurological research might explain why some people are reluctant. It is not so much the people as it is how some brains work! To be sure, people whose identity is built on knowing English, and who earn a living from it, might have an even stronger reaction, but I will show that there is nothing to fear. Change can be a win-win proposition if done with respect and intelligence.)

But how could we do it? There have been a few attempts to reform the English spelling system. The most serious one came to an abrupt end 100 years ago. President Roosevelt had initiated it. His friend Carnegie, who had been delegated the task of looking into it, believed that we should not force people to spell and read differently, as his board had decided. He preferred a more informal and timid reform where people could decide to adopt the changes or not. (Simplified Spelling Board - Wikipedia) Purists, vested interest groups (publishers), and political games did the rest. (German and French reforms of recent times have suffered similar fates. By trying to appeal to the general population, by making minimal changes, they opened themselves to criticism for being superfluous, or a lot of trouble for nothing.) The main issue, however, is the idea of forcing people to adopt the changes, or hoping that people will. Both approaches would make a holder of an MBA laugh; both are ineffective. In the case of English, there might be a cultural aspect to this as well. British culture is known for its ability to manage adversity by keeping a stiff upper lip and avoiding reality by going to bars, as this question demonstrated: What are some dark sides of the UK?
I am not sure if having to learn to spell and read such a complex system is one of the reasons for these types of behaviours, but it might make them less interested in changing matters. They are proud of their culture and their language, like most people are. That is to be admired, of course. But this is not about culture. It is not about language per se. It is about a spelling system. Times have changed. Times are changing. There is Quora, and we can discuss things. Today, we also know more about the management of change. People are texting now. Computers were introduced 30 years ago, and everyone seems to have a smartphone in their hands now. One billion Chinese are soon to be a force to be reckoned with. Which international language are they going to speak? Chinese? Many paradigms have shifted. I am not sure what the tipping point will be. But things have evolved.

One of the ideas (which is closer to Carnegie's thinking) is that we should NOT try to “force” people who know the current system to learn the new one. However, we have pushed that idea to its extreme. It is our contention that no one should learn to spell using the new system unless they want to. It is our contention that the key to making a reform work is to introduce it methodically and slowly in schools first, and only in schools. That does not mean introducing bits of the new spelling system to all grades. It will be ALL of the new spelling system, starting with the Grade 1 kids, as a wave. Of course, this plan would need to be approved by the government and the people. There will be a congress next year during which a group of linguists and professors at the English Spelling Society will decide which system(s) they recommend. I believe we should use a system that is based on a general dialect that has some, but not all, of the features of any of the dialects. I am talking about the vowel diaphonemes found on this page. I am including a reformatted sample to give an idea. The multiple dialects that the original chart has have been removed, because the idea is that the diaphonemes will be used as the pronunciation guide for whatever spelling system is chosen. So, in Iezy Ignglish, in all Commonwealth countries, “trap” words will now be pronounced with /ae/ and spelled with an “a”, as “trap”, and “bath”/“father” words (if the /a:/ is used) will not be spelled baeth/faether (faedher), and so on and so forth. Btw, this is one diaphoneme that would require agreement, as it has 2 possible phonemic representations. All others are straightforward. The match with the new spelling system (whatever it is) will be 1:1.

Beyond that, it is our view that a reform should take place in all schools once teachers have been trained. It should start with a group that has not yet learned to read and write: 6-year-old kids. The rest of the schoolchildren would be taught the old system. It might be wise to start teaching these children bits of the new system. Again, the government will look at the recommendations and decide what they feel is best. The next year, the second cohort of new Grade 1 kids would start school learning the new system, while the older Grade 1 would move into Grade 2, continuing to learn to read and write using the new system. (The wave is sketched in code just below.)
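To make the wave concrete, here is a minimal sketch in Python of which grades would be on the new system in a given rollout year. The 12-grade school model and all the names (GRADES, system_for) are my own illustrative assumptions, not part of the proposal itself.

# Toy model of the phased rollout: cohort 1 (C1) starts Grade 1 on the new
# system in year 1; every later year another Grade 1 cohort joins, so the
# wave reaches grade g in year g and covers all grades after 12 years.
# (The 12-grade assumption is illustrative, not from the proposal.)

GRADES = 12

def system_for(grade: int, year: int) -> str:
    """Spelling system taught to a given grade in a given rollout year."""
    # Grade g switches once the first new-system cohort has climbed up
    # to it, i.e. from year g onward.
    return "English 2.0" if year >= grade else "English 1.0"

for year in (1, 6, 12):
    new_grades = [g for g in range(1, GRADES + 1)
                  if system_for(g, year) == "English 2.0"]
    print(f"Year {year}: grades on English 2.0 -> {new_grades}")
    # Year 1 prints [1]; year 6 prints grades 1 through 6;
    # year 12 prints all twelve grades.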
Tablets will be given/lent to all students (school and home) to access information from the internet or other sources, except that this information will be instantly transcoded when they need it, as happens with Google Translate. I think that by the time this happens, most tablets will be very inexpensive for schools, so that all students can have one. (Dana Smith felt free tablets should be given to needy families, but I think these should be loaned like textbooks are/were.) They will be like those textbooks that were given to us at the beginning of the year. Btw, transcoding is much faster and more accurate than translating. Eventually, after a few years, some of these cohorts will be taught the basics of English 1.0: how to read street signs, store signs, … They would not learn how to spell using English 1.0, but they will learn to decode a basic set of words and, especially towards the end of their schooling, how to read the English 1.0 words of their trade. There might be a need for them to have a slightly different accent depending on how standardized English 2.0 will be, which might depend on which countries decide to participate in the reform. This reform will take 12 years to work its way out, but it will take years to make it occur. Convincing the population will take years, and then the politicians more years. But if and when it is approved, the 12 years will give society even more time to get ready. Free transcoders (programs that can transcode between English 1.0 and English 2.0) will be available to all. This will be very simple to do. In fact, some reformists have already made some. When these cohorts exit the school system, they will look for work like all students, or they will go to university. Books and manuals should be available in both codes. This should not be so hard for publishing companies, and digital copies should be available for download onto tablets. We would hope that by that time students’ books would all be digital. Again, these are recommendations.

Will a reform be perfect? Is anything perfect? Is the English spelling system perfect now? Why are those lovers of perfection in love with imperfection? There are thousands of imperfections now, and they are fine with those? Be coherent! The new system will no doubt be much easier to learn and to teach. That is self-evident, as demonstrated earlier. Something simple is easier to learn than something complex.

Will some dialects need to re-align the pronunciation of a few words? Probably, unless they want to develop their own English 2.0, or unless their kids can be bilingual in both Singapore English (which pronounces some words that have a short /a/ with an /E/) and the new English 2.0 (an international version). I believe kids have that capacity. It is up to them to pronounce it the way they want. We would suggest they teach the standard way in school and enforce the change in the media. After 250 years of laissez-faire, it is necessary to take extraordinary measures, but it will be up to Singapore and other dialects to align themselves with the diaphonemic spelling, which is as impartial as one can make it. Accents will be preserved in many instances, though they will be less … pronounced in some cases. But bad should not be spelled “bed”. Multiple spellings of words could be allowed whenever feasible. In some sense, a re-alignment will not be bad for those “rogue” dialectal words and speakers. It will allow them to be understood by more people in the Commonwealth. Is that bed?
I mean “bad”?

Let me address the most common objections that are often used to prevent any change.

1. There are too many accents (AKA dialectal variations in pronunciation).

Dialectal accents start to be “learned” or “perceived” by the age of 2, BEFORE children can link phonemes or allophones with any spelling, phonemic or not. Here is the research.
We know that children (their brains, really) have the capacity to learn many languages and many accents. In Italy, for instance, it is common for people to know a dialect (usually oral) and to speak/read/write standard Italian as well. We suggest that the only reasonable way to deal with this issue is to have all Commonwealth children start to learn another, standard dialect by Grade 1, which will finally be the lingua franca that people around the world have long been waiting for. (Thanks, Roman Huczok, for reminding me that keeping a dialect and learning a standard is very doable.)
To avoid political issues and help make English a true lingua franca, it would be wise to use the diaphonemes used on the International Phonetic Alphabet chart for English dialects - Wikipedia, or some other agreed form. If some populations of certain countries or regions were not interested in this new standard, they would have the option of staying with the current system or reforming their dialect as they please.
This would not be Armageddon, the end of English as we know it, an incredible loss of culture, … This is about spelling, not language.
The internet, public education for all, social media, ... are helping standardize many accents, and if spelling were reformed in this manner, that would become much easier.

2. I do not want to learn a new system.

You won't have to. I repeat … you will not have to. That is our pledge. I do not want to either. This reform is not for me or you, but for the next generation.
The change will occur in schools, starting with as many Grade 1 classes as possible. Opting out will be possible. In year 2, another group of Grade 1 will start to learn the new system. The first group will move into Grade 2 and keep learning the new system (or rather, learn using the new system, since they will have mastered decoding and spelling already).

3. There will be a need for some people to learn the new system.

Those aged 20 to 40 will need to be familiar with the new system, but free programs will be able to transcode from the current system to the other and vice versa, seamlessly and fast. Transcoding is much faster than translating. It is also much more accurate.
The cohort that goes into the labour force after 12 to 16 years will speak the same language. Speech recognition software and transcoding programs will do the rest.

4. Street signs and vendor signs will need to be respelled/respelt.

No. The new spellers will be able to decipher the old system.

5. ALL documents will need to be reprinted.

No. Digital documents will be transcodable. It is much easier to do so.
Should a citizen be interested in, or need to read, printed documents that are not in a digital format, I am sure we can figure out ways to efficiently recode these (text-to-speech software could deal with that issue), or have someone read the text to him or her, or transcode it.

6. Will translators lose their jobs?

No. A good segment of the population will still function in the current system.
No. The new spellers will need translation as much as the older generation does.
There will be a need for some transcoding too.
7. Will teachers lose their jobs?

If a Grade 1 teacher were incapable of or unwilling to teach the new system, they could be given the task of teaching those children who are opting out, or be asked to teach the old system (as a second language), or teach older grades. Substantial accommodations should be given to older teachers wanting to plan (prepare material) and/or learn the new code.
There will be a 4- or 5-year preparatory period before the transition starts (Year 1/Grade 1), which should give people plenty of time to shift, should they want to.
Unions will be consulted, and a system will be put in place to facilitate the transition for all.
Retirement by attrition would be one of the ways used to replace teachers.
Grade 1 teachers are often able to teach other grades.
New students will need a few teachers to teach the old system in a second-language mode.

8. Morphological links between words will be lost or reduced with a new, more phonemic system.

Everyone knows the link between language and linguistics, or photography and photographer, for instance. These pairs of words resemble each other, but the link is not automatic in the first pair. A more phonemic system will sometimes improve the semantic relation and sometimes obscure it. At the end of the day, some of the words that are linked by how they look require the learner to remember the pronunciation of the words, since they might not be pronounced as they are written, and, obviously, their spelling: photography /fəˈtɒɡ.rə.fi/ vs photographic /ˌfəʊ.təˈɡræf.ɪk/. Which is better? In a reformed spelling, these words would be spelled something like this in Iezy Ignglish: fetogrefy vs fotegrafic. Notice that in both, the stressed syllable is the one that does not have the “e” or schwa. That is a huge advantage for foreign learners, since at present nothing in the spelling shows where the word stress falls. Is there anyone who canNOT link the two words semantically? A newer system will improve the link between words that are spoken and words that are written/decoded/read. Learning should be faster as a result. The current system obscures the link between words that are spoken (and heard) and words that are written/decoded.
Furthermore, yes, there are words that look like they are related whose link will be obscured, but if spellings and misspellings are so important, aren’t there a lot of false positives that a respelling would clarify? Is ready about reading? Arch and archive? Apathy is about taking a path? Ballet is a small ball? Country is about counting? Breakfast is the action of breaking fast? Colonel is a special colon? Lead (the metal) is about leading? Bus and business? Deteriorate and deter are related? (They are not.) Cancel is about cans and cells? Gig (the performance) and gigantic are related? Have and haven are related? Ache and achieved? Reinvent and rein (vent)? All and allow? Inventory and invent are linked? Reached and ache? Resent is about sent/sending? Is a beldam a belle dame (it is an ugly woman)? Is noisome about noise (it is not!)? Is nowhere about now and here? How many more do I need to prove the point that there are a lot of false positives currently?
There are words in the current system that appear to be linked but aren’t. No one seems to be confused. Invest is about a vest that’s in a coat? Numb and numbers are related? Legal is about leg? Assertive about ass? Acting and actual are related? Deli and deliver? Heaven and heavy? Man and many? Add and address? Earl and early? Pet and petty? How about bigot?
Is nonplussed about being not plus (it means confused!)? Is disabuse about abuse (it is not!)? Is specious about species or special? Crudités are crude? Terrific and terrible mean complete opposites. How about restive and restful? How about condone and condemn? Disinterested does not mean not being interested! Are humility and humiliation connected semantically? NOPE! Is sublime less than good, like subway, substitute, subtract? Prodigy and prodigal are not related? Awesome and awful are opposites. Is being “broke” about being broken? Dulcet is about being sweet, not dull. Is ribald about badness? It isn’t! There are lots of false positives in that sense in the lexicon too.

9. Is it worth it?

Suppose we make English as regular as Finnish. Now consider: Finnish kids start school at age 7. Most English-speaking kids start school at age 5.5. How much does it cost to teach all of those kids for an extra 1.5 years? Teachers are expensive. Daycare? Less so. Imagine the possibilities. Also, there is quite a bit of data indicating that maybe kids do not need to go to school at age 5.5. Again, daycare or universal childcare could make the lives of millions, dare I say billions, of learners that much better. THAT is not worth it? What is?
Illiteracy rates at the 30% level in most Commonwealth countries will drop with a simpler system.
A simpler system will be MUCH cheaper to teach (fewer specialist teachers will be needed).
Learning will happen faster. As students HEAR a new word, they will be able to link it to its ONE possible spelling, and when they read a new word, they will be able to link it to a word that they heard. The brain connections will be reinforced more efficiently. Let's take a word that you have never seen printed before: “tuleafashouhe”. Are you sure of its pronunciation? Where is the word stress? And then, a few weeks later, you hear “tlayfaychor” on TV. Would you be able to connect the 2? Most likely not, but if it had been spelled as it is pronounced, then the connection would have been made, with more certainty. It is self-evident that more coherence between the systems would make learning faster and easier.
Fewer kids will be pulled out and shamed as reading-disabled.
Less crime, as more people will be able to read and write. (Robots will do the menial work that illiterate people sometimes must do.)
Happier labour force.
Better educated/literate labour force.
Better economy.
More people around the world will be able to learn an easier system.
Easier travelling and understanding between people.
More people will be able to read books written in the new code. Higher profits for English speakers.

10. Which industry will lose?

Tutoring agencies and tutors could lose out. Still, we could make the first generation that learns the new code bicodal. If so, they will surely need help to learn the old code, just like past generations did.
We need to make this a win-win situation. Anyone displaced will be given a choice of work related to what they were doing before.
Teachers (attrition and re-assignment will need to be addressed), but those who cannot cope will be re-assigned.
Publishing houses will benefit. Some of the old material will need to be digitized, but a lot already has been (Project Gutenberg, Google, …).
Psychologists who assess students’ reading and writing abilities/intelligence will lose out, but I suspect that this is a small number, seeing how many of these evaluations took place in my 25 years of teaching.
11. What do these new spelling systems look like?

Some use most of the spelling rules that exist now. They just regularize many of the patterns. (Masha Bell has one such system.)
A reform would not mean spelling with a phonetic system like the IPA. There is no cursive writing (although this could be created, I suppose). Cursive writing is faster than printing words, but aren't more and more people going to use technology to avoid handwriting altogether? Even in the rare instances where people are asked to write in cursive, a recorder with speech recognition software could do the work of transcribing much more efficiently than anyone could, even with shorthand.
Other systems attempt to maximize the opportunity, as taking a second shot at this will prove unlikely. Iezy Ignglish is such a system. It systematizes the easiest pattern of English, the vowel+e pattern found in many words (piece, clue, foe, reggae, …), and it echoes the long vowel+consonant+e pattern found in a lot more words, which is more contrived than the first pattern and makes decoding a much harder task than it should be (late, cute, core, mite, mere). The simpler pattern would do away with the cumbersome consonant-doubling rule for changing the vowel value: pat/patting, mat/matting vs mate/mating.
Others can be found on the English Spelling Society website.

12. Will communication between the ones who know the new system and the ones who don't be affected?

The language/speech/conversations will be the same.
The only communication mode that will be affected is the written mode, but is there anyone who thinks that most people will not have smartphones or tablets or computers to allow this?
The internet will need transcoding work, but programs can easily be created, I am told by programmers. These programs will be able to transcode tons of material and will do it faster than any translation program (and much better).

13. There will be many homophones/heterographs. Will they not make communication harder? (Thanks to Tomas Murphy for that one.)

You/ewe, read/reed, ad/add, …
No one is confused when speaking and listening. There are very few instances where this is a problem in real life. Many cannot be confused, as many are not even the same type of word: check (verb)/cheque (noun), ad/add, it’s/its, their/there/they’re, … Let’s respell them: tchèc, ad, its, dhèr. I went to cash the tchèc. I tchèc the tchèc.
Homonyms: bark (dog) and bark (tree). Many words have multiple meanings and one spelling. Is there anyone who is confused?
The great majority of these words are disambiguated by the types of words they are. You is a pronoun and ewe is a noun. Read is a verb and reed is a noun. Ad is a noun and add is a verb. The linguistic context helps clarify matters already.
The great majority of these words are disambiguated by context.
Hundreds of thousands of misspellings are okay, but 500 homophones will cause issues? There are just 500 homophones, and there are close to half a million misspelled words. We cannot fix the spelling of half a million words because of 500 words (which, upon analysis, would not cause comprehension issues if they were respelled)?

14. Accents will vanish?

For the last 250 years (and more) they have NOT vanished, even with an extremely POOR system representing them.

15. Language changes.

The printing press, spell checking, public schooling, ... have cemented the spelling of words for centuries now, and they will continue to do so.
If one were to re-adjust spelling to be more phonemic/regular, spell checking and printing would operate in the same manner, cementing the new spellings of words. I doubt that vowel shifts will occur again.
With a more phonemic system, there will be fewer chances of deviation. Deviations occurred mainly because, in the past, few people were schooled, could write, or could read. The system will be much more stable.

16. The current spelling helps me.

If you are French, Spanish, or Italian (any Romance language), about 50% of words will be spelled more or less like they are in these languages, and reading will obviously be easy for these words. However, the spelling will interfere with the pronunciation of these words. Everybody knows how these speakers have a really tough time pronouncing English words. Why? Because they are not pronounced as they are in those languages. Why? Because there is very little guidance in the spelling to indicate how these words are pronounced. Again, there are hundreds of thousands of words whose pronunciation needs to be memorized.
If you are Greek, the spelling will be easier for you only 6% of the time. Pronunciation, again, is going to be a huge problem for hundreds of thousands of words, as it is for everyone.
People who speak Germanic languages are helped by the spelling 1/4 of the time. Unless they know a Romance language, they will struggle with the pronunciation of the French words and others. The pronunciation of the Germanic-based words will be easier. They are usually shorter and do not contain the invisible “schwa” that one needs to know about for the bigger Latin/French words.
But what about people who don't speak any of these languages? Are Asian speakers persona non grata? How selfish is it for European-language speakers to stick it to the Asian, Arabic, and African-language speakers? Will they forgive you, or will they ask you to learn Chinese or Arabic? What goes around comes around. Latin used to be the lingua franca of the world, and how many people speak Latin now?

ADDENDUM:

The schwa: about, children, pencil, renovate, supply, syringe, luscious, mission, blood, does, cousin, thorough, and especially. (Even “one” or o_e could be included, as it is pronounced “wun”. So, even 14!) (I am including stressed syllables/words as schwas, as in does, which some phoneticians feel are very different from the “a” in “about”. This university linguistics course (Linguistics 115, Pennsylvania) seems to indicate I am right: Phonetic symbols; and this linguist, in a speech, makes a case for the schwa being just a reduced phoneme of /ʌ/.)
/ei/: great, raid, grey, gray, ballet, mate, table, café, matinée, reggae, vein, vain
/ɛ/: bear, care, aerial, their, there, questionnaire, mayor, bury, any, friend, leopard
/i:/: be, been, bean, key, mere, elite, people, ski, debris, quay

NOTE: Differences in English acquisition

Differences in English acquisition depend on the language you already know. If you speak a Germanic language, it will be easy for you to speak, read (including out loud), and understand English, especially day-to-day English. If you speak a Romance language like French, Italian, Spanish, …, it will be relatively easy for you to read/decode/understand written English (especially if it is academic or legal), but the speaking and listening parts will be challenging, especially in day-to-day tasks.
English is a mix of both these families: the easier words are related to Germanic languages like Dutch and Danish, while the more difficult words are usually French or Latin-based (Italian, Spanish, …). Of course, if you are a speaker of Chinese, of a language that is not related to English in any way, then it is going to be an even greater challenge, as even the phonemes and, of course, the alphabetic letters will be completely new to you.

Read John Katt's answer to What are your most controversial or unpopular opinions?

My blog Njú Ingliş has this article, and it’s a good new grapheme system. Here are all the letters we’d use. The original letter is followed by the one with a diacritic.

a = /ə/ again
á = /aː/ father
à = /ʌ/ cut
å = /ɒ/ bottle
æ = /æ/ bat
b = /b/ butter
c = /t͡ʃ/ chopper
d = /d/ dog
ð = /ð/ this
e = /ɛ/ bet
é = /eɪ/ Brave
f = /f/ phone
g = /g/ god
ĝ = /d͡ʒ/ gel
h = /h/ hall
i = /ɪ/ bit
í = /iː/ beet
j = /j/ you
k = /k/ cut
l = /l/ loch
m = /m/ man
n = /n/ no
o = /ɔ/ boy
ó = /oʊ/ oh
p = /p/ Potter
*q = /q/ Qatar
r = /ɹ/ river
s = /s/ salt
ş = /ʃ/ shelter
t = /t/ tall
þ = /θ/ thorn
u = /ʊ/ ruthless
ú = /u/ boot
v = /v/ valve
w = /w/ will
*x = /x/ loch (Scottish English)
*y = /y/ few (only in some accents)
z = /z/ zed
ž = /ʒ/ vision

*Q, Y, and X are to be used only for transliterating other scripts.

Só, Ái ges it wiɫ ték às sàm táim tú get júzd tú ðis þing, bàt its fòr ár ón gud!

I’m also making a Greek grapheme system, which will look sexier.
Áim olsó méking a Grík grafím sistam wic wil luk seksiar.
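Because the letter table above is strictly one grapheme to one phoneme, the whole system can be expressed as a plain lookup table, and the "free transcoder" programs mentioned earlier reduce, at toy scale, to a character-by-character substitution. Here is a minimal sketch in Python; the dictionary contents come from the table above, but the names (NJU_INGLIS_TO_IPA, to_ipa) are my own illustrative inventions, not any published tool.

# A minimal sketch, assuming the one-to-one letter table above; all names
# here are illustrative, not part of any real transcoder.

NJU_INGLIS_TO_IPA = {
    "a": "ə", "á": "aː", "à": "ʌ", "å": "ɒ", "æ": "æ",
    "b": "b", "c": "t͡ʃ", "d": "d", "ð": "ð", "e": "ɛ",
    "é": "eɪ", "f": "f", "g": "ɡ", "ĝ": "d͡ʒ", "h": "h",
    "i": "ɪ", "í": "iː", "j": "j", "k": "k", "l": "l",
    "m": "m", "n": "n", "o": "ɔ", "ó": "oʊ", "p": "p",
    "q": "q", "r": "ɹ", "s": "s", "ş": "ʃ", "t": "t",
    "þ": "θ", "u": "ʊ", "ú": "u", "v": "v", "w": "w",
    "x": "x", "y": "y", "z": "z", "ž": "ʒ",
}

def to_ipa(text: str) -> str:
    # Because the scheme is one-to-one, transcoding is a per-character
    # lookup; spaces, punctuation, and unknown letters pass through unchanged.
    return "".join(NJU_INGLIS_TO_IPA.get(ch, ch) for ch in text.lower())

print(to_ipa("gud"))   # -> ɡʊd
print(to_ipa("þing"))  # -> θɪnɡ

Note that this is only the easy half: a real English 1.0 <-> 2.0 transcoder would also need a pronunciation dictionary for the traditional side, since current English spellings are anything but one-to-one.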
