A Guide to Completing the Q-Line -- Credit Application Online
If you are looking to modify and complete a Q-Line -- Credit Application, here is the easy guide you need to follow:
- Hit the "Get Form" button on this page.
- Wait patiently for your Q-Line -- Credit Application to upload.
- Erase, add text, sign, or highlight as you choose.
- Click "Download" to save the file.
A Revolutionary Tool to Edit and Create Q-Line -- Credit Application


How to Easily Edit Q-Line -- Credit Application Online
CocoDoc has made it easy to fill out important documents in an online browser and modify them as needed. To edit a PDF document on the online platform, follow these simple steps:
- Open CocoDoc's website in your device's browser.
- Hit the "Edit PDF Online" button and upload the PDF file from your device, without even logging in to an account.
- Edit your PDF for free using the toolbar.
- Once done, save the document from the platform.
Once the document has been edited in the online browser, you can export the form however you choose. CocoDoc ensures you are provided with the best environment for working with PDF documents.
How to Edit and Download Q-Line -- Credit Application on Windows
Windows users are common throughout the world, and they have encountered thousands of applications offering PDF-editing services. However, these applications have often been missing important features. CocoDoc intends to offer Windows users the ultimate document-editing experience through its online interface.
Modifying a PDF document with CocoDoc is simple. Follow these steps:
- Pick and install CocoDoc from the Windows Store.
- Open the software, select the PDF file from your Windows device, and continue editing the document.
- Fill in the PDF file with the appropriate toolkit provided by CocoDoc.
- On completion, hit "Download" to save the changes.
A Guide to Editing Q-Line -- Credit Application on Mac
CocoDoc has brought an impressive solution for people who own a Mac. It has allowed them to have their documents edited quickly. Mac users can fill PDF forms with the help of the online platform provided by CocoDoc.
To understand the process of editing a form with CocoDoc, follow the steps below:
- Install CocoDoc on your Mac first.
- Once the tool is opened, upload your PDF file from the Mac.
- Drag and drop the file, or choose it by clicking the "Choose File" button, and start editing.
- Save the file on your device.
Mac users can export the resulting file in various ways: download it to the device, add it to cloud storage, or share it with other people through email. They can edit files through multiple methods without downloading any tool onto their device.
A Guide to Editing Q-Line -- Credit Application on G Suite
Google Workspace is a powerful platform that connects the members of a single workplace in a unique manner. By allowing users to share files across the platform, it covers all the major tasks that can be carried out in a physical workplace.
Follow these steps to edit Q-Line -- Credit Application on G Suite:
- Go to the Google Workspace Marketplace and install the CocoDoc add-on.
- Attach the file and push "Open with" in Google Drive.
- Edit the document in the CocoDoc PDF editing window.
- When the file is fully edited, download it or save it through the platform.
PDF Editor FAQ
UPSC prelims 2020 had around 10 questions on agriculture which required BSC, MSc agriculture level knowledge. Do you think UPSC should separate IFS and IAS/IPS prelims? (General studies are not general anymore)
My view: General Studies is still general.

The ingredients for solving the agriculture questions asked in CSE 2020 were: basic understanding + the right temperament + a simple option elimination/selection strategy. Let us analyze the questions asked and see how any general candidate, not only a BSc/MSc agriculture candidate, could solve them.

QUESTION ON SUGARCANE PLANTING
Was this question too specific? No. Can only a BSc/MSc Ag graduate answer it? No. Look at the reasons below.

Q. With reference to the current trends in the cultivation of sugarcane in India, consider the following statements:
1. A substantial saving in seed material is made when 'bud chip settlings' are raised in a nursery and transplanted in the main field
2. When direct planting of setts is done, the germination percentage is better with single-budded setts as compared to setts with many buds
3. If bad weather conditions prevail when setts are directly planted, single-budded setts have better survival as compared to large setts
4. Sugarcane can be cultivated using settlings prepared from tissue culture
Which of the statements given above is/are correct?
a) 1 and 2 only
b) 3 only
c) 1 and 4 only
d) 2, 3 and 4 only

Why can anyone answer this question?
Reason 1: This news was covered in January 2020 in a leading newspaper that every CSE aspirant reads, i.e. The Hindu [link attached]: Bud chip technology catching on among sugarcane farmers.
Reason 2: A person with minimal agriculture knowledge can also easily figure out the answer by the elimination/selection technique. Statement 4 is a "can be" statement, which is mostly correct in UPSC questions, so the answer must be option C or D. Statement 2 cannot be correct: common sense says germination will always be better when you have multiple buds/seeds (if you sow 5 seeds versus 25 seeds, in which case will the germination percentage be higher? I think anyone can tell). Since statement 2 is wrong, option D is eliminated, and the answer is option C (1 and 4 only). [Common sense and slight presence of mind could have led you to the right answer even if you had not covered The Hindu in January.]

Let us pick one more question!

QUESTION ON ECO-FRIENDLY TECHNIQUES
Was this question too specific? No. Can only a BSc/MSc Ag graduate answer it? No. Look at the reason below.

Q. In the context of India, which of the following is/are considered to be practice(s) of eco-friendly agriculture?
1. Crop diversification
2. Legume intensification
3. Tensiometer use
4. Vertical farming
Select the correct answer using the code given below:
a) 1, 2 and 3 only
b) 3 only
c) 4 only
d) 1, 2, 3 and 4

Reason: The question asks about "eco-friendly" practices, and even a primary school student knows what eco-friendly means. Everyone knows that legume (pulse) plants increase the nitrogen content of soil, so you use less fertilizer, making legume intensification an eco-friendly practice. Similarly, crop diversification is a very common term that we come across in multiple newspaper articles, and vertical farming, where you gain the maximum yield in the minimum area, is also in the news nowadays. If I do not know what a tensiometer is, that is not a problem, as the answer could easily be reached from the options. So the statements were very general, and anyone, not only a BSc/MSc agriculture graduate, could conclude the right answer.

Next question.

QUESTION ON FERTIGATION
Was this question too specific? No. Can only a BSc/MSc Ag graduate answer it? No. Look at the reason below.

Q. What are the advantages of fertigation in agriculture?
1. Controlling the alkalinity of irrigation water is possible
2. Efficient application of rock phosphate and all other phosphatic fertilizers is possible
3. Increased availability of nutrients to plants is possible
4. Reduction in the leaching of chemical nutrients is possible
Select the correct answer using the code given below:
a) 1, 2 and 3 only
b) 1, 2 and 4 only
c) 1, 3 and 4 only
d) 2, 3 and 4 only

Reason: A simple option elimination technique gives you the right answer. Observe statement 2: "all other phosphatic fertilizers". Extreme statements are usually wrong, and this eliminates statement 2. You are then left with only one option, C, hence the answer is C. Just have the right temperament and presence of mind and you will get the answer. I don't know what fertigation is, I don't know what rock phosphate is, but I can get the right answer easily with simple common sense. A general student can answer the question, not only a BSc or MSc graduate.

Next question.

QUESTION ON THE KISAN CREDIT CARD
Was this question too specific? No. Can only a BSc/MSc Ag graduate answer it? No. Just basic understanding can yield the right answer. How? Look at the reason below.

Q. Under the Kisan Credit Card scheme, short-term credit support is given to farmers for which of the following purposes?
1. Working capital for maintenance of farm assets
2. Purchase of combine harvesters, tractors and mini trucks
3. Consumption requirements of farm households
4. Post-harvest expenses
5. Construction of family house and setting up of village cold storage facility
Select the correct answer using the code given below:
a) 1, 2 and 5 only
b) 1, 3 and 4 only
c) 2, 3, 4 and 5 only
d) 1, 2, 3, 4 and 5

Reason: The KCC is always in the news, so everyone knows that credit through the KCC is short-term credit only. Statement 2 therefore cannot be correct, as it talks about machinery, which is an asset and hence comes under long-term loans. Every option except B contains statement 2, so we can easily deduce the right answer, B, with common basic knowledge. Can only a BSc/MSc agriculture graduate answer this question? Again, the answer is a big NO.

Next question.

QUESTION ON CROPS
Was this question too specific? No. Can only a BSc/MSc Ag graduate answer it? No. Look at the reason below.

Q. "The crop is subtropical in nature. A hard frost is injurious to it. It requires at least 210 frost-free days and 50 to 100 centimeters of rainfall for its growth. A light well-drained soil capable of retaining moisture is ideally suited for the cultivation of the crop." Which one of the following is that crop?
a) Cotton
b) Jute
c) Sugarcane
d) Tea

Reason: Look for the key words: FROST, SOIL, RAINFALL. Everyone knows sugarcane and jute are water-consuming crops, as they are often in the news for requiring large amounts of water; sugarcane is even called a water-guzzler crop in the Economic Survey. So 50-100 cm of rainfall would not be sufficient, and options b and c cannot be the answer. Tea is cultivated in areas like Assam and West Bengal where rainfall is plentiful, so with the simple logic of RAINFALL we can deduce the right answer: COTTON. Not even a BSc/MSc graduate would remember such parameters; they too would work on logic to answer this question. Hence, again, specialized knowledge is not required: reading the question carefully and applying simple logic about rainfall leads to the right answer.

Similarly, if you go through the rest of the questions, UPSC has set the options in such a way that with basic knowledge, the right temperament, and the option elimination/selection technique, you can easily arrive at the right answer.

Suggestion: Prepare as you have been preparing; just keep your eyes and ears open and try to relate things to your existing knowledge. Don't get distracted by claims that the questions are too specific. Keep it simple and amplify your basic understanding of each topic in the syllabus, and you will feel much more comfortable solving these questions. UPSC is a professional body; it knows what it is doing, and the options are set so that a candidate from any stream with clear fundamentals can pick the right one.

Takeaway: Be clear on the fundamentals, amplify them, and keep your eyes and ears open!

Other authentic answers/explanations can be found at the links below:
https://optimizeias.com/how-i-predicted-upsc-cse-question/
https://optimizeias.com/upsc-prelims-2020-solved-paper-set-d/
After WWII, were there a lot of former pilots who never flew again in civilian life? It seems like that would be very depressing, going from such glory to something like a milk truck driver. How many left-over pilots were there?
When I dropped out of Texas A&M to enlist, I promised my mother that I'd return to college and finish. She wanted to make sure, knowing that I had been an Army brat for 18 years and loved the Army. She sent in an application to the University of Texas on my behalf and mailed me the letter of acceptance. When my year was up I was offered a $40,000 reenlistment bonus for six years and a promotion; the bonus was tax free if I spent another year in the Nam. I showed the letter to the CSM of 5th Group and he told me to get the F*&% out of his office, and that I'd better graduate.

I was 22 when I started over as a freshman. I transferred 26 hours from my three semesters at Texas A&M; I only needed one more semester of English. I graduated with honors May 5, 1974. My undergrad degree was Marketing and Advertising, which landed me in various advertising agencies. It's a strange occupation because you never stay very long at any ad agency: once you win ADDY awards, both gold and silver, other agencies start calling. You have to invest for your own retirement. It was a wild ride; 'Mad Men' was tame compared with reality.

I took a leap of faith by starting my own agency. Searching for new accounts, I landed 14 of what we call mom-and-pop businesses; the money wasn't as much as I had hoped for, but it was my company and it was enough. Then, on a tip, I called Liberty Homes, a residential builder with nine subdivisions. On average they sold 14 homes a month, and landing the account turned out to be worth $45,000 every month. An agency makes its income by marking up every ad, outdoor sign, and print job by 17.5%. The suppliers give the agency two bills, one for the client and another for the agency with the 17.5% subtracted; when I received a check I kept the 17.5% and paid the supplier the difference. A few months later I landed a business developer, Morley Properties; they built office buildings up and down the East Coast. Once we started, they were billed $65,000 a month.

I was excellent at coming up with ad campaigns, which brought a lot of clients to both companies. I had no problem running the creative side but was lost when it came to running the business. I found Don Blue, with whom I had served in the Nam on A-404. Don has a BA and knew the ins and outs of running the business side of an agency. I had a problem trusting anyone with the $1.2 million a year that the agency brought in, but I trusted Don with my life, so I had no problem trusting him with the accounting side of the company. One day Don and I were in our conference room discussing the future; he was filling out a deposit slip, and with a huge smile he slid the slip across the table. The deposit amount was $98,000. We were on our way.

My success was based on what I learned in the Special Forces Training Group (now called the Q Course): the backward planning chart and other techniques. Don is 6'7", black, and in college he was the blocking back for Gale Sayers at Kansas. I found a silent partner who wanted to give us the money to operate without having to worry about a slow cash flow. With his help we secured a $100,000 line of credit, and all he wanted was the tax form every year for his taxes.

Then in 1988 the bottom dropped out of my world. The Savings & Loan banks started collapsing across the country. SASA (San Antonio Savings and Loan) was next to fall, which killed my agency. The bank called the builder's note due ($40,000,000), which caused the CEO of the residential building company to shut down, and the commercial developer left Texas.
I gathered my eight employees in the conference room to tell them that I had to shut down the agency. I paid a visit to the management company running our office building, informed them of what had happened, and said I needed to move into a smaller space. They told me that I had signed a five-year lease and they wouldn't work with me. Fearing I would jump the lease, they changed the locks on the three doors that led to our offices. I called a moving company and asked if they could show up at 8 am and have me moved out before the people from the management company showed up at 12 noon on Saturday. All three locks had been changed, but I was trained to overcome. I went into the men's room, pushed a ceiling tile to one side, and climbed into the crawl space until I was over my offices. I dropped down and taped all of the locks open. I spent the night on my couch. At 8 am a large van showed up, and they had everything out by 11 am. I found a building that belonged to the Archdiocese of San Antonio. They had built a new office complex and were renting office space at $0.35 a square foot; at the original building I had been paying $1.50 a square foot. About five months later, the woman from the old office management team showed up at my office door. She stepped in, was wowed by the opulence, and told me that her company was going to sue me. OK, nice seeing you again. I never heard another word. Several printers, a real estate magazine, and an outdoor board company all went out of business due to the crash.

Trying to start over again, I called on Frontier Enterprises, and after I made my pitch and showed them my portfolio, they offered me the position of Marketing Director. I asked about the salary, and the president, along with HR, slid a contract toward me. I skimmed over the contract: medical, a 401K retirement plan, life insurance, and a nice budget to work with. The salary: $105,000 a year. I told them that I'd like a day or two to think about the offer. When I got into my car I was like, thank you Lord; you kept me alive in the Nam and here you are again saving my ass. Oh, hell yeah, I took the position. The CEO said in a meeting that their 36 coffee shops across Texas were losing customers every time the obits came out: I need you to come up with a plan to entice a younger crowd to eat at Jim's.

I went to the numbers guy and asked how low we could sell a burger, fries, and a Coke and still make money. He came back with $2.78; McDonald's charged $3.68. So I rented every billboard that stood by every McDonald's in San Antonio with a simple message: Burger, Fries and a Coke, and a huge $2.78. Three of the coffee shops had grease fires; every Jim's was packed. The district manager for McDonald's called and told me that I had to take down the outdoor boards. I informed him that we had signed a six-month contract, they'd be down in five months, and hung up. At the next directors' meeting the CEO came in and threw down a computer printout (the old-fashioned type with white and light green lines and a series of holes along each edge). He shouted, great news, we are up $26,000 this last quarter. Each department head started saying, that's because of the food department, the management department. The CEO held his hands up and said no; pointing at me, he shouted, our marketing genius. This placed a huge target on my back. The company added a casino in Mississippi. Once a month I would fly to Biloxi and spend a few days checking up on the POPs, setting up photo shoots, etc.
I would stay in the penthouse suite, paid for nothing, and every evening I would visit the restaurant for dinner. They would bring out the only meal I ate on every visit: deep-fried butterfly shrimp, not just the usual 8, but a soup bowl filled to the brim.

For all of my success I owe a deep debt of gratitude to the training I received in my time in Special Forces: always think outside the box, nothing is impossible, just bust your butt and never give up, and of course the Lord.
Which are the top 20 papers related to AI (both machine learning & symbolic), so that I can cover the basics and choose a niche for my research?
Ah, the sort of challenging question that I like to ponder on an otherwise lazy Saturday morning in the San Francisco Bay Area! I began my career in AI as a young Master's student at the Indian Institute of Technology, Kanpur, actually enrolled as an EE major, but I was enchanted into studying AI by Hofstadter's Gödel, Escher, Bach. That was 1982, so I've been working in AI and ML for the past 36 years. Along the way, I've read, oh, easily about 10,000 papers or so, give or take a few hundred. So, from among these thousands of papers, I now have to pick the "top 20 papers", so that you, the interested Quora reader, can get a glimpse of what attracts someone like me to give up everything in pursuit of this possibly idealized quest to make machines as smart as humans and other animals. Now, there's a challenge I can't resist.

OK, any list like this is going to be 1) hopelessly biased by my personal choices, 2) not entirely representative of modern AI, and 3) a VERY long read! Remember that a lot of us who got into AI in the late 1970s or early 80s did so long before there was any commercial hope that AI would pay off. We were drawn to the scientific quest underlying AI: how to build a theory that explains how the brain works, how the mind results from the brain, and so on. None of us had any clue, it is safe to say, that in the early 21st century AI would become a hugely profitable venture.

But I'm going to argue that now more than ever, it is vitally important for those entering the AI field to understand 1) where the ideas for AI came from, and 2) that insights into the brain come from many fields, from neuroscience and biology to psychology and economics, and from mathematics as well; my choice of papers reflects that, and I've chosen papers from multiple academic fields. I've also not shied away from papers that are critical of things you might believe in deeply (e.g., the power of statistical machine learning to solve potentially any AI problem).

I'll try to pepper the list with my chatty commentary as well, so it's not going to be one of those all too boring "here's 20 things you should know about blah" lists, which is all too often what you see on the web. But, with my commentary, this is going to be a really long reply. What I want to give you a glimpse of is the panoply of fascinating characters who made up this interdisciplinary quest to understand brains from a scientific and computational point of view, how diverse their backgrounds were, and what an amazingly accomplished set of minds they were. It is to their credit that AI has come along as quickly as it has, barely 60 years since it began. Without such a dazzling collection of minds working on the problem, we would probably have taken much longer to make any real progress.

The list is somewhat historical and arranged chronologically as far as possible. I've also tried to keep in mind that the point of this list is to be comprehensible to a newbie entering the field of AI, so much as I like heavy-hitting math papers, I've included only a few very sophisticated, highly technical ones, since you need to get a sense of what AI is in the 21st century. The readability of this top 20 list therefore varies widely: some papers are easy to get through in a Sunday afternoon. Others — well, let's say that you'll need several weeks of concentrated reading to make headway, assuming you have the math background.
But there are not many of the latter, so don't worry about not having the right background (yet). Let's begin, as they say, at the beginning…

1. A logical calculus of the ideas immanent in nervous activity, by Warren McCulloch and Walter Pitts (Univ. of Chicago), Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133, 1943. (http://www.cs.cmu.edu/~./epxing/Class/10715/reading/McCulloch.and.Pitts.pdf) This is the first great paper of modern computational neuroscience, written by two brilliant researchers: one senior and distinguished (McCulloch); the other, Pitts, a dazzling prodigy who had had no formal education of any sort, but talked his way into a position with McCulloch. Pitts grew up in inner-city Detroit, and because he was mercilessly beaten up by gang members older than him, he took refuge in the Detroit Public Library. It is rumored he devoured all 1000+ pages of Bertrand Russell and Alfred North Whitehead's Principia Mathematica in one marathon reading session of several days and nights. This is not easy reading — it is a dense logical summary of much of modern math. Pitts was brave enough, even though he was barely of high school age, to audaciously write to Bertrand Russell in England (then a famous literary figure who would go on to win the Nobel Prize in Literature, as well as a great mathematician), pointing out a few errors and typos in the magnum opus. This young boy so impressed Russell that he later wrote him a glowing recommendation to work with Warren McCulloch. Thus was born a great collaboration, and both moved shortly to MIT, where they came under the influence of none other than Norbert Wiener, the wunderkind mathematician who invented the term "cybernetics" (the study of AI in man and machines). McCulloch was a larger-than-life character who worked all night and seemingly subsisted on a diet of "Irish whiskey and ice cream". Pitts wrote a dazzlingly beautiful PhD thesis on "three-dimensional neural nets", and then, as a tragic Italian opera would have it, everything fell apart. Wiener and McCulloch had a falling out (so petty was the reason that I will not repeat it here), and thus McCulloch stopped actively working with Pitts, and Pitts sort of just faded away, but sadly, not before burning the only copy of his unpublished PhD dissertation before he defended it. No copy of this work has yet been found. Read the tragic story here — warning: keep a box of Kleenex handy, for at the end you will cry! — The Man Who Tried to Redeem the World with Logic - Issue 21: Information - Nautilus. (Read also the classic paper "What the Frog's Eye tells the Frog's Brain", https://hearingbrain.org/docs/letvin_ieee_1959.pdf, by the same duo. A great modern companion is the recent breakthrough in biology at Caltech, where the facial code used in primate brains to identify faces has finally been cracked — The Code for Facial Identity in the Primate Brain — showing that in one narrow area, we may know what the human eye is telling the human brain, almost 60 years after McCulloch and Pitts asked the question.)

2. Steps towards Artificial Intelligence, Marvin Minsky, Proceedings of the IRE, January 1960. (http://worrydream.com/refs/Minsky%20-%20Steps%20Toward%20Artificial%20Intelligence.pdf) Many formally date the beginning of AI to this article, which outlined the division of AI into different subfields, many of which are still around; this paper can really be said to have been the first to lay out the modern field of AI in its current guise.
Minsky was a prodigy who did a PhD in math at Princeton (like many others in AI, past and present), and after a dazzling postdoc at Harvard as a Fellow (where he did early work in robotics), he started the highly influential MIT AI Lab, which he presided over for a number of decades. He was a larger-than-life character, and those who knew him well had a large stock of stories about him. Among the best I've heard is one where he was interviewing a faculty candidate — a rather nervous young PhD who was excitedly explaining his work on the blackboard — and when the student turned around, he discovered he was alone in the office. Minsky had disappeared during his explanation. The student was mortified, but Minsky later explained that what the student had told him sounded so interesting that he had to step outside and take a walk to think the ideas over. Minsky was a polymath: at home in theoretical computer science, where he wrote some influential papers and a book; in psychology, where he was an avid disciple of Freud and wrote a paper on AI and jokes and what they mean about the subconscious; in education, where he pioneered new educational learning technology; and in many other fields.

3. Programs with Common Sense, John McCarthy, in Minsky (ed.), Semantic Information Processing, pp. 403–418, 1968. (http://www-formal.stanford.edu/jmc/mcc59.ps) McCarthy was the other principal founder of AI, who after a short period of working at MIT left to found the Stanford AI Lab, which in due course proved just as influential as its East Coast cousin. McCarthy above all was a strong believer in the power of knowledge, and in the need for formal representations of knowledge. In this influential paper, he articulates his ideas for a software system called an "Advice Taker", which can be instructed to do a task using hints. The Advice Taker is also endowed with common sense, and can deduce obvious conclusions from the advice given to it. For example, a self-driving car could be given the official rules of the road, as well as some advice about how humans drive (such as "in general, humans do not follow the speed limit on most highways, but tend to drive 5–10 miles above the speed limit"). Critically for McCarthy's conceptualization, it would not be sufficient to have a neural net learn the driving task: knowledge had to be represented explicitly so it could be reasoned about. He says something profound on page 4, in italics, which may shock most modern ML researchers: "In order for a program to be capable of learning something it must first be capable of being told it"! By this definition, McCarthy would not view most deep learning systems as really doing "learning" (for none of them can be told what they learn). McCarthy was also famous for his work on the lambda calculus, inventing the programming language LISP, in which much of AI research was then carried out. Most of my early research in AI was done using LISP, including my first (and most highly cited) work on using reinforcement learning to teach robots, in the early 1990s at IBM.

4. Why should Machines Learn?, by Herbert Simon, in Michalski, Carbonell, and Mitchell (editors), Machine Learning, 1983. (http://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=33928) Herb Simon was a Nobel laureate in Economics who spent his entire academic career at Carnegie Institute of Technology (later Carnegie Mellon University), doing much to build the luster and prestige of this now world-class university.
He was one of the true polymaths, at home in half a dozen departments, from computer science to economics to business administration and psychology, to all of which he made foundational contributions. He was a gifted speaker, and I was particularly fortunate to attend several presentations by Simon during the mid 1980s, when I spent several years at CMU. In this article, Simon asks the question that very few AI researchers today bother asking: why should machines learn? According to Simon, why should machines, which can be programmed, bother with this slow and tedious form of knowledge acquisition when something far quicker and more reliable is available? You'll have to read the article to find his answers, but it is valuable for giving perhaps the first scientific definition of the field of machine learning, a definition that is still valid today. Simon made many other contributions to AI, including his decades-long collaboration with Allen Newell, another AI genius at CMU, whose singular ability to ask the right questions made him a truly gifted researcher. It is rumored that computer chess came to life when Allen Newell mentioned casually, in a conversation in the CMU CS common room, that the branching factor of chess would not be at all difficult to emulate in hardware, a comment that Hans Berliner followed up on in bringing the first modern chess player, Deep Thought, to fruition (the same CMU team went to IBM and built Deep Blue, which of course beat Kasparov).

5. Non-cooperative games, PhD thesis, John Nash, Princeton. (Non-Cooperative Games) John Nash came to Princeton as a 20-year-old mathematician from Carnegie Institute of Technology in 1948 with a one-line recommendation letter: "This man is a genius". His PhD thesis would fully affirm his alma mater's assessment of his capabilities. Nash took von Neumann and Morgenstern's work on zero-sum games to a whole new level with his dazzling generalization, which would earn him a Nobel Prize decades later. Most of Nash's history is recounted in Sylvia Nasar's wonderful biography A Beautiful Mind (later made into a movie starring Russell Crowe as John Nash). Legend has it that von Neumann himself did not think much of Nash's work, calling it "another fixed point theorem". Nash finished his groundbreaking thesis in less than a year from start to finish: he arrived at Princeton in September 1948, and in November 1949 Solomon Lefschetz, a distinguished mathematician, communicated the results of Nash's thesis to the National Academy of Sciences. Today, billions of dollars of product (from wireless cellular bandwidth to oil prospects) are traded using Nash's ideas of game theory. The most influential model in deep learning today is the Generative Adversarial Network (GAN), and the key question being studied for GANs is whether and when they converge to a Nash equilibrium. So, 70 years after Nash defended his short but Nobel-prize-winning thesis at Princeton, his work is still having a huge impact on ML and AI. Nash's work also became a widely used framework to study evolutionary dynamics, giving rise to a new field called evolutionary game theory, pioneered by John Maynard Smith. Game theory is a crucial area not only for AI but also for CS. It has been said that the "Internet is just a game. We have to find what the equilibrium solution is".
Algorithmic game theory is a burgeoning area of research, studying things like "The Price of Anarchy", or how hard optimization problems can be solved by letting millions of agents make locally selfish decisions. Nash's PhD advisor at Princeton was Tucker, whom Nash called "The Machine". The second reader of his PhD thesis was Tukey, who can be called one of the fathers of modern machine learning, since he invented exploratory data analysis at Princeton (and later co-invented the Fast Fourier Transform).

6. Maximum Likelihood from Incomplete Data via the EM Algorithm, Dempster, Laird, and Rubin, Journal of the Royal Statistical Society, Series B, 1977. (Maximum Likelihood from Incomplete Data via the EM Algorithm) In the mid 1980s, ML took a dramatic turn, along with AI, towards the widespread use of probabilistic and statistical methods. One of the most influential models of machine learning during the 1990s was based on Fisher's notion of maximum likelihood estimation. Since most interesting probabilistic models in AI had latent (unobserved) variables, maximum likelihood could not be directly applied. The EM algorithm, popularized by three Harvard statisticians, came to the rescue. It is probably the most widely used statistical method in ML of the past 25 years, and well worth knowing. This paper, which is cited over 50,000 times on Google Scholar, requires a certain level of mathematical sophistication, but it is representative of modern ML, and much of the edifice of modern ML is built on ideas like EM. A very simple way to think of EM is in terms of "data hallucination". Let's say you want to compute the mean of 20 numbers, but forgot to measure the last 5. Well, you could compute the mean over the 15 numbers only, or you could do something clever: plug in an initial guess of the mean for each of the 5 missing numbers. This leads to an easy recurrence relation that lets you find the mean. In the one-dimensional case, this happens to give the same answer as ignoring the last 5 numbers, but in the two-dimensional case, where one or the other coordinate may be missing, EM finds a genuinely different solution.
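The "data hallucination" view is easy to turn into code. Here is a minimal sketch of that recurrence for the 20-numbers example above; the synthetic data, seed, and iteration count are my choices, not anything from the paper:

```python
import numpy as np

# EM as "data hallucination": estimate a mean when 5 of 20 values are missing.
rng = np.random.default_rng(0)
x = rng.normal(10.0, 2.0, size=20)
x[15:] = np.nan           # the last 5 measurements were never taken

mu = 0.0                  # initial guess for the mean
for _ in range(100):
    filled = np.where(np.isnan(x), mu, x)   # E-step: hallucinate the missing values
    mu = filled.mean()                      # M-step: re-estimate the mean

# As the text notes, in 1-D this converges to the mean of the 15 observed values.
print(mu, np.nanmean(x))
```

The fixed point satisfies mu = (sum of the observed values + 5*mu)/20, which rearranges to exactly the mean of the 15 observed values, just as claimed.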
7. A Theory of the Learnable, by Les Valiant, Communications of the ACM, 1984. (https://people.mpi-inf.mpg.de/~mehlhorn/SeminarEvolvability/ValiantLearnable.pdf) George Orwell wrote a brilliant novel about the rise of an all-powerful, all-knowing Government that spies on everyone. Well, in the very year the novel is named after, Les Valiant, a brilliant computer scientist at Harvard, proved that Orwell's fears could not be completely realized, due to intrinsic limitations on what can be learned from data in polynomial time. That is, even if the Government could spy on individuals, it is possible to construct functions whose identity remains hidden because discovering them would require intractable computation. Valiant's work led to his winning the Turing Award several decades later, computer science's version of the Nobel Prize. What Valiant did in this landmark paper was articulate a theory of machine learning analogous to complexity theory for computation. He defined PAC learning, or probably approximately correct learning, as a model of knowledge acquisition from data, showed examples where a class of functions was PAC learnable, and also speculated about non-learnable functions. Valiant's work in the past three decades has been hugely influential. For example, the most widely used ensemble method in ML, boosting, came out as a direct result of PAC learning. Also to be noted is that support vector machines (SVMs) were justified using the tools of PAC learning. This is a short but beautifully written paper, and while it is not an easy read, your ability to understand and grasp it will make the difference between being an ML scientist and an ML programmer (no value judgement of either; the world needs plenty of both types of people!).

8. Intelligence without representation, Rodney Brooks, IJCAI 1987 Computers and Thought Award lecture. (http://www.fc.uaem.mx/~bruno/material/brooks_87_representation.pdf) Brooks based his ideas for building "behavior-based robots" on ethology, the study of animal behavior. What ethologists found was that ants, bees, and lots of other insects were incredibly sophisticated in their behaviors, building large, complex societies (ant colonies, bee hives), and yet their decision-making capacity seemed to be based on fairly simple rules. Brooks took this idea to heart and launched a major critique of the then representation-heavy apparatus of knowledge-based AI. He argued that robots built using knowledge-based AI would never function well enough in the real world to survive: a robot crossing the road that sees a truck and begins to reason about what it should do would get flattened by the truck before its reasoning engine came up with a decision. According to Brooks, this failure was due to a misunderstanding of how brains are designed to produce behavior. In animals, he argued, behaviors are hard-wired in a layered, highly modularized form, so that complexity emerges from the interleaving of many simple behaviors. One of his early PhD students, Jonathan Connell, showed that you can design a robot, called Herbert (after Herb Simon), that could do the complex task of searching an indoor building for soda cans, picking them up, and throwing them into the trash, all the while having no explicit representation of the task anywhere. Later, after Jon Connell graduated, he came to work for IBM Research, where he and I collaborated on applying RL to teach behavior-based robots new behaviors. Brooks was a true pioneer of robotics and inserted a real-world emphasis into his work that was until then sorely lacking. He had a common-sense wisdom about how to apply the right sort of engineering design to a problem, and was not enamored of using fancy math to solve problems that had far simpler solutions. Much of the success of modern autonomous driving systems owes something to Brooks' ideas. It is possible that the tragic accident in Arizona involving an Uber vehicle might have been averted had that particular vehicle been outfitted with a behavior-based design (which countermands bad decisions, like the one the Uber vehicle allegedly made, of labeling the pedestrian a false positive).

9. Natural Gradient Works Efficiently in Learning, Amari, Neural Computation, 1998. (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.452.7280&rep=rep1&type=pdf) One of the living legends of statistics is the Indian scientist C. R. Rao, now in his 90s, who has done the most since Fisher to build up the edifice of modern statistics. C. R. Rao invented much of modern multivariate statistics as a young researcher at the University of Cambridge, England, through his study of fossils of human bones from Ethiopia. In a classic paper written in his 20s, C. R. Rao showed that the space of probability distributions is curved, like Einstein's space-time, with a Riemannian inner product defined on the space of tangents at each point of its surface, and he showed how the Fisher information metric could be used to define this inner product. Amari, a brain science researcher in Japan, used this insight to define natural gradient methods, a widely used class of methods for training neural networks, in which the direction pursued to modify the weights at any given point is not the Euclidean gradient direction, but a direction based on the curved structure of the underlying probability manifold. Amari showed that the natural gradient often works better, and later wrote a highly sophisticated treatise on information geometry, expanding on this work. Many years later, in 2013, a group of PhD students and I showed that natural gradient methods can actually be viewed as special cases of a powerful class of dual-space gradient methods called mirror descent, invented by the Russian optimization researchers Nemirovsky and Yudin. Mirror descent has now become the basis of one of the most widely used gradient methods in deep learning, ADAGRAD, by Duchi (now at Stanford), Hazan (now at Princeton), and Singer (now at Google). It is very important to understand these various formulations of gradient descent, which requires exploring some beautiful connections between geometry and statistics.
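To make the natural-gradient idea concrete, here is a minimal sketch of a single update step. It uses the empirical Fisher matrix built from per-example score vectors, a common practical stand-in for the exact Fisher metric; the function, its damping term, and the toy usage are my illustrative assumptions, not Amari's code:

```python
import numpy as np

# One natural-gradient step: precondition the loss gradient by the inverse
# Fisher information, estimated here empirically from per-example score
# vectors (gradients of the log-likelihood).
def natural_gradient_step(theta, grad, scores, lr=0.1, damping=1e-3):
    n, d = scores.shape
    fisher = scores.T @ scores / n + damping * np.eye(d)  # damped empirical Fisher
    # Steepest descent w.r.t. the information geometry, not Euclidean geometry:
    return theta - lr * np.linalg.solve(fisher, grad)

# Toy usage with made-up numbers.
rng = np.random.default_rng(0)
theta = np.array([1.0, -2.0])
grad = 2 * theta                       # gradient of the toy loss ||theta||^2
scores = rng.normal(size=(100, 2))     # stand-in per-example score vectors
print(natural_gradient_step(theta, grad, scores))
```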
10. Learning to Predict by the Methods of Temporal Differences, by Richard Sutton, Machine Learning journal, pp. 9–44, 1988. (https://pdfs.semanticscholar.org/9c06/865e912788a6a51470724e087853d7269195.pdf) TD learning remains the most widely used reinforcement learning method, 34 years after it was invented by UMass PhD student Richard Sutton, working in collaboration with his former PhD advisor, Andrew Barto; together they can be said to have laid the foundations of the modern field of RL (on whose work the company DeepMind was originally formed, before being acquired by Google). It is worth noting that Arthur Samuel in the 1950s experimented with a simple form of TD learning and used it to teach an IBM 701 to play checkers, which can be said to be the first implementation of both RL and ML in the modern era. But Rich Sutton brought TD learning to life, and if you read the above paper, you'll see that the mathematical sophistication he brought to its study was far beyond Samuel's. TD learning has now moved far beyond this paper, and if you want to see how mathematically sophisticated its modern variants are, I will point you to the following paper (which builds on the work of one of my former PhD students, Bo Liu, who brought the study of gradient TD methods to a new level with his work on dual-space analysis). Janet Yu has written a very long (80+ pages), dense mathematical treatise on the modern version of gradient TD, which you have to be very strong in math to understand fully ([1712.09652] On Convergence of some Gradient-based Temporal-Differences Algorithms for Off-Policy Learning). TD remains one of the few ML methods for which there is some evidence of biological plausibility: the brain seems to encode TD error using dopamine neurotransmitters, and the study of TD in the brain is a very active area of research (see http://www.gatsby.ucl.ac.uk/~dayan/papers/sdm97.pdf).
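If you have never seen TD learning run, here is a minimal sketch of tabular TD(0) on a five-state random-walk prediction task, in the spirit of the random-walk example in Sutton's paper; the step size, episode count, and seed are my choices:

```python
import numpy as np

# Tabular TD(0) on a 5-state random-walk prediction task.
# States 1..5 are non-terminal; 0 and 6 are terminal. True values are 1/6..5/6.
n_states = 5
V = np.zeros(n_states + 2)       # value estimates; terminal entries stay at 0
alpha, gamma = 0.1, 1.0          # step size and discount
rng = np.random.default_rng(0)

for episode in range(5000):
    s = 3                                         # start in the middle
    while s not in (0, n_states + 1):
        s2 = s + rng.choice((-1, 1))              # random step left or right
        r = 1.0 if s2 == n_states + 1 else 0.0    # reward only at the right end
        # TD(0): nudge V(s) toward the bootstrapped target r + gamma * V(s')
        V[s] += alpha * (r + gamma * V[s2] - V[s])
        s = s2

print(np.round(V[1:6], 2))   # approaches [0.17, 0.33, 0.5, 0.67, 0.83]
```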
11. Human learning in Atari, Tsividis et al., AAAI 2017. (http://gershmanlab.webfactional.com/pubs/Tsividis17.pdf) Deep reinforcement learning was popularized in a sensational paper in Nature (Human-level control through deep reinforcement learning) by a large group of DeepMind researchers, and it is by now so well known and cited that I resisted the temptation to include it in my top 20 list (where most people would put it). It has led to large numbers of follow-on papers, but many of these seem to miss the fairly obvious fact that there is a huge gulf between the speed at which humans learn Atari games and the speed at which TD Q-learning with convolutional neural nets does so. This beautiful paper by cognitive scientists at MIT and Harvard shows that humans learn many of the Atari games in a matter of minutes of real-time play, whereas deep RL methods require tens of millions of steps (which would be many months of human time, perhaps even years!). So, deep RL cannot be the ultimate solution to the Atari problem, even if it is currently perhaps the best we can do. There is a huge performance gap between humans and machines here, and if you are a young ML researcher, this is where I would go to make the next breakthrough. Humans seem to do much more than deep RL when learning to play Atari.

12. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers, Boyd et al., Foundations and Trends in Machine Learning, 2011. (Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers, which has MATLAB code as well) The 21st century has arrived, and with it the dawn of cloud computing, and machine learning is poised to exploit these large cloud-based computational structures. This very long and beautifully written paper by Stanford optimization guru Stephen Boyd and colleagues shows how to design cloud-based ML algorithms using a broad and powerful framework called the Alternating Direction Method of Multipliers (ADMM). As the saying goes in the Wizard of Oz, "we are no longer in Kansas, Toto": with this paper, we are squarely in modern machine learning land, where the going gets tough (but then, as the saying goes, "the tough get going"). This is a mathematically deep and intense paper of more than 100 pages, so it is not an easy read (unless, that is, you are someone like Walter Pitts!). But the several weeks or months you spend reading it will greatly improve your ability to see how to exploit modern optimization knowledge to speed up many machine learning methods. What is provided here is a generic toolbox from which you can design many specialized variants (including Hadoop-based variants, as shown in the paper). To understand this paper, you need to understand duality theory, and Boyd himself has written a nice book on convex optimization to help you bridge that chasm. The paper is highly cited, for good reason, as it is a model of clarity.
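To give a feel for what ADMM looks like in practice, here is a minimal sketch for the lasso, one of the examples treated in the paper. The x-, z-, and u-updates follow the standard pattern; the problem sizes, rho, and lambda below are arbitrary choices of mine:

```python
import numpy as np

# ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||x||_1,
# split as f(x) + g(z) subject to x - z = 0.
def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)   # the x-update is a ridge-like solve
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))              # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)   # soft threshold
        u = u + x - z                                                   # dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = (1.0, -2.0, 0.5)
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))   # recovers a sparse vector
```

Note the design choice that makes ADMM attractive at scale: the expensive matrix in the x-update is fixed across iterations, so it can be factored once, and the z-update is a cheap elementwise operation that splits across machines.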
13. Learning Deep Architectures for AI, by Bengio, TR 1312, Univ. of Montreal (https://www.iro.umontreal.ca/~lisa/pointeurs/TR1312.pdf) (also published in the journal Foundations and Trends in Machine Learning). Bengio has done more than almost anyone else to popularize deep learning, and is also one of its primary originators and innovators. In this paper, he lays out a compelling vision for why AI and ML systems need to incorporate ideas from deep learning, and while many of the specifics have changed with the rapid progress of the last few years, this paper is a classic that has aged well. It was written as a counterpoint to the then-popular shallow architectures in machine learning, such as kernel methods. Bengio is giving another of his popular tutorials on deep learning at the forthcoming IJCAI conference in July in Sweden, in case you are interested in attending the conference or the tutorial. I don't have to say much more about deep learning, as it is the subject of a barrage of publicity these days. Suffice it to say that today AI is very much in the paradigm of deep learning (meaning a framework in which every problem is posed as a problem of deep learning, whether that is the right approach or not!). Time will tell how well deep learning survives in its current form. There are beginning to be worries about the robustness of deep learning solutions (the ImageNet architectures seem very vulnerable to random noise, which humans can't even see, let alone respond to), and the sample complexity still seems formidable. Scalability remains an open question, but deep learning has shown remarkable performance in many areas, including computer vision (if you download the latest version of MATLAB R2018a, you can run the demo image recognition program with a webcam on objects in your own house, and decide for yourself how well you think deep learning works in the real world).

14. Theoretical Impediments to Machine Learning, with Seven Sparks from the Causal Revolution, by Judea Pearl, Arxiv 2018. ([1801.04016] Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution) Pearl is, in my view, the Isaac Newton of AI. He developed the broad probabilistic framework of graphical models, which dominated AI from the 1990s to the 2010s. He subsequently went in a different direction with his work on causal models, and now argues that probabilities are "an epiphenomenon" (a surface property of a much deeper causal truth). Pearl's work on causal models has yet to gain the same traction in AI as his earlier work on graphical models (which is a major subfield of both AI and ML). Largely, the reasons have to do with the sort of applications that causal models fit well. Pearl focuses on domains like healthcare, education, climate change, and societal models, where interventions are needed to change the status quo. In these hugely important practical applications, he argues, descriptive statistics is not the end goal; causal models are. The 2009 second edition of his book Causality is still the most definitive modern treatment of the topic, and well worth acquiring.

15. Prospect Theory: An Analysis of Decisions under Risk, by Daniel Kahneman and Amos Tversky, Econometrica, pp. 263–291, 1979. Daniel Kahneman received the Nobel Prize in Economics for this work with his collaborator Amos Tversky (who sadly died, and could not share in the prize). In this pathbreaking work, they asked themselves a simple question: how do humans make decisions under uncertainty? Do they follow the standard economic model of maximizing expected utility? Suppose I gave you the choice between two doors: choose Door 1, and with 50% probability you get no cash prize and with 50% probability you get $300; alternatively, choose Door 2 and you get a guaranteed prize of $100. It perhaps won't surprise you that many humans choose Door 2, even though expected utility theory says you should choose Door 1 (its expected value is $150, much higher than Door 2's $100). What's going on? Well, humans tend to be risk averse: we would rather have the $100 for sure than risk getting nothing behind Door 1. This beautiful paper, which has been cited over 50,000 times, explores such questions in a number of beautifully simple experiments that have been repeated all over the world with similar results. And here's the rub: much of the theory of modern probabilistic decision making and reinforcement learning in AI is based on maximizing expected utility (Markov decision processes, Q-learning, etc.). If Kahneman and Tversky are right, then much of modern AI is barking up the wrong tree! If you care about how humans actually make decisions, should you continue to choose an incorrect approach? Your choice. Read this paper and decide.
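The door example is easy to check numerically. The sketch below computes the expected values, then shows how a concave (risk-averse) utility flips the preference toward the sure $100; the square-root utility is purely illustrative and is not Kahneman and Tversky's actual value function:

```python
import math

# Door 1: 50% chance of $0, 50% chance of $300. Door 2: $100 for sure.
door1 = [(0.5, 0.0), (0.5, 300.0)]
door2 = [(1.0, 100.0)]

expected_value = lambda lottery: sum(p * x for p, x in lottery)
print(expected_value(door1), expected_value(door2))   # 150.0 vs 100.0 -> prefer Door 1

# A concave utility (illustrative only) models risk aversion:
risk_averse = lambda lottery: sum(p * math.sqrt(x) for p, x in lottery)
print(risk_averse(door1), risk_averse(door2))         # ~8.66 vs 10.0 -> prefer Door 2
```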
16. Towards an Architecture for Never-ending Language Learning, Carlson et al., AAAI 2010. Humans learn over a period of decades, but most machine learning systems learn over a much shorter period of time, often for just a single task. This CMU effort, led by my former PhD advisor Thomas Mitchell, explores how a machine learning system can learn over a very long period of time by exploring the web and learning millions of useful facts. You can interact with the actual NELL system online at Carnegie Mellon University. NELL is a fascinating example of how the tools of modern computer technology, namely the world wide web, make it possible to design ML systems that can run forever. NELL could potentially live longer than any of us, constantly acquiring facts. One issue, at the heart of recent controversies, is of course "fake news": how does NELL know that what it has learned is true? The web is full of fake assertions. NELL currently uses a human vetting approach to decide which of the facts it learns are really to be trusted. Similar systems can be designed for image labeling, language interactions, and many other tasks.

17. Topology and Data, by Gunnar Carlsson, Bulletin of the American Mathematical Society, April 2009. (http://www.ams.org/images/carlsson-notes.pdf) The question many researchers want answered is: where is ML going in the next decade? This well-known Stanford mathematician argues in favor of more sophisticated methods from topology, a well-developed area of math that studies the abstract properties of shape. Topology is what mathematicians use to decide that a coffee cup (with a handle) and a doughnut are essentially the same, since one can be smoothly deformed into the other without cutting. Topology has one great strength: it can be used to analyze data even when the standard smoothness assumptions of ML cannot be made. It goes without saying that the mathematical sophistication needed here is quite high, but Carlsson refrains from getting very deep into the technical subject matter, giving for the most part high-level examples of what structure can be inferred using the tools of computational topology.

18. 2001: A Space Odyssey, book by Arthur C. Clarke, and movie by Stanley Kubrick. My last choice of reading — this has gone on long enough, and both you and I are getting a bit tired by now — is not an AI paper, but a movie and its associated book. The computer HAL in Kubrick's movie 2001 is to my mind the best exemplar of an AI-based intelligent system, one that is hopefully realizable soon. 2001 was released in 1968, exactly 50 years ago, and its 50th anniversary was marked recently. Many of my students and colleagues, I find, have not seen 2001. That is indeed a sacrilege. If you are at all interested in AI or ML, you owe it to yourself to see this movie, or read the book, and preferably do both. It is to my mind the most intelligent science fiction movie ever made, and it puts all later movies to shame (no, there are no silly laser-sword fights or fake explosions or Darth Vaders here!). Instead, Stanley Kubrick designed the movie to be as realistic as the technology of the 1960s would allow, and it is surprisingly modern even today. HAL is of course legendary, starting with his voice ("I'm sorry, Dave" is now available as a ringtone on many cellphones). But HAL is also a great example of how modern AI will work with humans and help assist many functions. Many long voyages into space, such as to Mars or beyond, cannot be done without a HAL, as humans will have to sleep or be in hibernation to save on storage for food, etc. There is a nice book by Stork that does a scene-by-scene analysis of HAL in the movie, comparing it with where AI is in the 21st century. This book is also worth acquiring.

OK, I'm ending this uber-long reply two papers short of the required 20, but I'm sure I've given you plenty of material to read and digest. I did also cheat a bit here and there and gave you multiple papers to read per entry. Happy reading. I hope your journey into the fascinating world of AI is every bit as rewarding and fun as mine has been over the past 30+ years.