Ibm Powerpc Design In Bluespec: Fill & Download for Free

GET FORM

Download the form

How to Edit Your Ibm Powerpc Design In Bluespec Online With Efficiency

Follow the step-by-step guide to get your Ibm Powerpc Design In Bluespec edited in no time:

  • Hit the Get Form button on this page.
  • You will go to our PDF editor.
  • Make changes to your document, such as adding a date or new images, using the tools in the top toolbar.
  • Hit the Download button to save your finished document to your local computer.

We Are Proud to Let You Edit Ibm Powerpc Design In Bluespec With a Streamlined Workflow

Take a Look At Our Best PDF Editor for Ibm Powerpc Design In Bluespec


How to Edit Your Ibm Powerpc Design In Bluespec Online

If you need to sign a document, you may need to add text, fill in the date, and do other editing. CocoDoc makes it very easy to edit your form in a few steps. Let's see the easy steps.

  • Hit the Get Form button on this page.
  • You will go to our online PDF editor web app.
  • When the editor appears, use the tools in the top toolbar to edit your form, such as highlighting and erasing.
  • To add a date, click the Date icon, then hold and drag the generated date to the target place.
  • Change the default date by typing another date in the box.
  • Click OK to save your edits and click the Download button when you finish editing.

How to Edit Text for Your Ibm Powerpc Design In Bluespec with Adobe DC on Windows

Adobe DC on Windows is a useful tool for editing files directly on a PC. So, let's get started.

  • Click the Adobe DC app on Windows.
  • Find and click the Edit PDF tool.
  • Click the Select a File button and select a file from your computer.
  • Click a text box to adjust the text font, size, and other formats.
  • Select File > Save or File > Save As to confirm the edit to your Ibm Powerpc Design In Bluespec.

How to Edit Your Ibm Powerpc Design In Bluespec With Adobe DC on Mac

  • Select a file on your computer and open it with Adobe DC for Mac.
  • Navigate to and click Edit PDF in the panel on the right.
  • Edit your form as needed by selecting the tool from the top toolbar.
  • Click the Fill & Sign tool and select the Sign icon in the top toolbar to customize your signature in different ways.
  • Select File > Save to save the changed file.

How to Edit your Ibm Powerpc Design In Bluespec from G Suite with CocoDoc

Do you like using G Suite for your work? You can integrate your PDF editing with CocoDoc in Google Drive, so you can fill out your PDF within your familiar work platform.

  • Go to the Google Workspace Marketplace, then search for and install the CocoDoc for Google Drive add-on.
  • In Google Drive, find and right-click the form, then select Open With.
  • Select the CocoDoc PDF option, and allow your Google account to integrate with CocoDoc in the popup window.
  • Choose the PDF Editor option to open the CocoDoc PDF editor.
  • Use the tools in the top toolbar to edit your Ibm Powerpc Design In Bluespec where needed, such as signing and adding text.
  • Click the Download button to save your form.

PDF Editor FAQ

What is RISC-V?

RISC-V (pronounced "risk-five") is an open-source instruction set architecture (ISA) based on established reduced instruction set computing (RISC) principles. In contrast to most ISAs, the RISC-V ISA is free and open-source and can be used royalty-free for any purpose, permitting anyone to design, manufacture and sell RISC-V chips and software. While not the first open-architecture[1] ISA, it is significant because it is designed to be useful in a wide range of devices. The instruction set also has a substantial body of supporting software, which avoids a usual weakness of new instruction sets. The project began in 2010 at the University of California, Berkeley, but many contributors are volunteers and industry workers outside the university.[2] The RISC-V ISA has been designed with small, fast, and low-power real-world implementations in mind,[3][4] but without over-architecting for a particular microarchitecture style.[4][5][6][7] As of May 2017, version 2.2 of the userspace ISA is fixed and the privileged ISA is available as draft version 1.10.[4]

Significance

The RISC-V authors aim to provide several CPU designs freely available under a BSD license. Such licenses allow derivative works, such as RISC-V chip designs, to be either open and free, like RISC-V itself, or closed and proprietary. By contrast, commercial chip vendors such as ARM Holdings and MIPS Technologies charge substantial license fees for the use of their patents.[8][9] They also require non-disclosure agreements before releasing documents that describe their designs' advantages and instruction set. The secrecy prevents security auditing.

Developing a CPU requires design expertise in several specialties: electronic logic, compilers, and operating systems. It is rare to find this outside of a professional engineering team. The result is that modern, high-quality general-purpose computer instruction sets have not recently been widely available anywhere, or even explained, except in academic settings. Because of this, many RISC-V contributors see it as a unified community effort. This need for a large base of contributors is part of the reason why RISC-V was engineered to fit so many uses.

The RISC-V authors also have substantial research and user experience validating their designs in silicon and simulation. The RISC-V ISA is a direct development from a series of academic computer-design projects. It was originated in part to aid such projects.[4][10]

History

Predecessors

The term "RISC" dates from about 1980.[11] Before this, there was some knowledge that simpler computers could be effective, but the design principles were not widely described. Simple, effective computers have always been of academic interest.

Academics created the RISC instruction set DLX for the first edition of Computer Architecture: A Quantitative Approach in 1990. David Patterson was an author, and later assisted RISC-V. However, DLX was for educational use.
Academics and hobbyists implemented it using field-programmable gate arrays, but it was not a commercial success. ARM CPUs, version 2 and earlier, had a public-domain instruction set, and it is still supported by GCC, a popular free-software compiler. Three open-source cores exist for this ISA, but they have not been manufactured.[12][13] OpenRISC is an open-source ISA based on DLX, with associated RISC designs. It is fully supported with GCC and Linux implementations. However, it has few commercial implementations.

Foundation

Krste Asanović at the University of California, Berkeley, found many uses for an open-source computer system. In 2010 he decided to develop and publish one in a "short, three-month project over the summer". The plan was to help both academic and industrial users.[10] David Patterson at Berkeley also aided the effort. Patterson originally identified the properties of Berkeley RISC,[11] and RISC-V is one of his long series of cooperative RISC research projects. Early funding was from DARPA.[4]

A partial list of organizations that support the RISC-V Foundation includes: AMD,[14] Andes Technology,[15] BAE Systems, Berkeley Architecture Research, Bluespec, Inc., Cortus, Google, GreenWaves Technologies, Hewlett Packard Enterprise, Huawei, IBM, Imperas Software, ICT, IIT Madras, Lattice Semiconductor, Mellanox Technologies, Microsemi, Micron, Nvidia, NXP, Oracle, Qualcomm, Rambus Cryptography Research, Western Digital, and SiFive.[16][17]

Awards

2017: The Linley Group's Analyst's Choice Award for Best Technology (for the instruction set)[18]

Motivating requirements

The designers claim that the instruction set is the main interface in a computer, because it lies between the hardware and the software. If a good instruction set were open, available for use by all, it should dramatically reduce the cost of software by permitting far more reuse. It should also increase competition among hardware providers, who can use more resources for design and less for software support.[10]

The designers assert that new principles are becoming rare in instruction set design, as the most successful designs of the last forty years have become increasingly similar. Of those that failed, most did so because their sponsoring companies failed commercially, not because the instruction sets were poor technically. So, a well-designed open instruction set designed using well-established principles should attract long-term support by many vendors.[10] Most prior open ISAs used the GNU General Public License (GPL), encouraging users to open their implementations to copying and use by others.

Unlike other academic designs which are optimized only for simplicity of exposition, the designers state that the RISC-V instruction set is for practical computers. It is said to have features to increase computer speed, yet reduce cost and power use. These include a load/store architecture, bit patterns to simplify the multiplexers in a CPU, simplified standards-based floating point, a design that is architecturally neutral, and placing the most significant bits at a fixed location to speed sign extension. Sign extension is said to often be on the critical timing path.[4]

The instruction set is designed for a wide range of uses. It supports three word-widths, 32, 64, and 128 bits, and a variety of subsets. The definitions of each subset vary slightly for the three word-widths.
The subsets support small embedded systems, personal computers, supercomputers with vector processors, and warehouse-scale rack-mounted parallel computers.

The instruction set is variable-width and extensible, so that more encoding bits can always be added. Space for the 128-bit stretched version of the ISA was reserved, because 60 years of industry experience has shown that the most unrecoverable error in instruction set design is a lack of memory address space. As of 2016, the 128-bit ISA remains intentionally undefined, because there is yet so little practical experience with such large memory systems.[4]

However, RISC-V also supports the designers' academic uses. The simplicity of the integer subset permits basic student exercises. The integer subset is a simple ISA enabling software to control research machines. The variable-length ISA enables extensions for both student exercises and research.[4] The separated privileged instruction set permits research in operating system support, without redesigning compilers.[19] RISC-V's open intellectual property allows its designs to be published, reused, and modified.[4]

Software

A normal problem for a new instruction set is a lack of CPU designs and software. The RISC-V website has a specification for user-mode instructions, and a preliminary specification for a general-purpose privileged instruction set, to support operating systems.[20]

There are several open-sourced CPU designs, including the 64-bit Berkeley Out of Order Machine (BOOM),[21] the 64-bit Rocket,[22] five 32-bit Sodor CPU designs from Berkeley,[23] picorv32 by Clifford Wolf, scr1 from Syntacore, PULPino (Riscy and Zero-Riscy) from ETH Zürich / University of Bologna,[24] and others. The three-stage Sodor CPU appears apt for a small embedded CPU. Rocket may suit compact, low-power intermediate computers such as personal devices. BOOM utilizes much of the infrastructure created for Rocket, and may be usable for personal, supercomputer and warehouse-scale computers. Both picorv32 and scr1 are 32-bit MCU-class RV32IMC implementations in Verilog. The cores in PULPino implement a simple RV32IMC ISA for microcontrollers (Zero-Riscy) or a more powerful RV32IMFC ISA with custom DSP extensions for embedded signal processing.

The design software includes a design compiler, Chisel,[25] which can reduce the designs to Verilog for use in devices. The website includes verification data for testing core implementations. Available RISC-V software tools include a GNU Compiler Collection (GCC) toolchain (with GDB, the debugger), an LLVM toolchain, the OVPsim simulator (and library of RISC-V Fast Processor Models), the Spike simulator, and a simulator in QEMU.

Operating system support exists for the Linux kernel, FreeBSD, and NetBSD, but the supervisor-mode instructions are unstandardized as of 10 November 2016,[19] so this support is provisional. The preliminary FreeBSD port to the RISC-V architecture was upstreamed in February 2016, and shipped in FreeBSD 11.0.[26][27] Ports of Debian[28] and Fedora[29] are stabilizing. There is a port of U-Boot.[30] UEFI Spec v2.7 has defined the RISC-V binding, and a TianoCore port has been done by HPE engineers and is expected to be upstreamed.
There is a preliminary port of the seL4 microkernel.[31][32] There is also a simulator that runs a RISC-V Linux system in a web browser using JavaScript.

Adopters

Commercial

SiFive, a company established specifically for developing RISC-V hardware, has processor models released in 2017.[33][34] These include a quad-core, 64-bit RISC-V system on a chip (SoC).[35]

Syntacore,[36] a founding member of the RISC-V Foundation and one of the first commercial RISC-V IP vendors, has developed and licensed a family of RISC-V IP since 2015. As of 2018, the product line includes eight 32- and 64-bit cores, including the open-source SCR1 MCU core.[37] The first commercial SoCs based on Syntacore IP were demonstrated in 2016.[38]

Andes Technology Corporation, a founding member of the RISC-V Foundation[39] which joined the consortium in 2016, released its first two RISC-V cores in 2017. The cores, the N25 and NX25, come with a complete design ecosystem and a number of RISC-V partners. Andes is actively driving the development of the RISC-V ecosystem and expects to release several new RISC-V products in 2018.

Codasip and UltraSoC have developed fully supported intellectual property for RISC-V embedded SoCs that combine Codasip's RISC-V cores and other IP with UltraSoC's debug, optimization and analytics.[40]

Imperas has developed a family of fast processor models for the different subsets of the RV32GC and RV64GC ISA variants that are part of the OVPsim instruction-accurate simulator distributions used for embedded software development.

GreenWaves Technologies announced the availability of GAP8, a 32-bit SoC with one controller core plus eight compute cores, and a developer board, in February 2018. Their GAPuino GAP8 development board started shipping in May 2018.[41][42][43]

Hex Five announced the general availability of MultiZone Security, the first RISC-V Trusted Execution Environment, utilizing the standard RISC-V ISA and privileged-mode extensions.[44]

CloudBEAR is a processor IP company that develops its own RISC-V cores for a range of applications.[45]

In development

Indian Institute of Technology Madras is developing six RISC-V open-source CPU designs for six distinct uses, from a small 32-bit CPU for the Internet of Things (IoT) to large, 64-bit CPUs designed for warehouse-scale computers such as server farms based on RapidIO and Hybrid Memory Cube technologies.[6][46]

Nvidia plans to use RISC-V to replace their Falcon processor on their GeForce graphics cards.[47]

ASTC developed a RISC-V CPU for embedded ICs.[48]

Adapteva plans to use RISC-V in a successor to their manycore accelerator product.[49]

lowRISC[50] is a non-profit project that aims to implement a fully open-source system on a chip (SoC) based on the 64-bit RISC-V ISA.

The Computer Laboratory, University of Cambridge, in collaboration with the FreeBSD Project, has ported the FreeBSD operating system to 64-bit RISC-V to use as a hardware-software research platform.[27]

ETH Zurich and the University of Bologna have cooperatively developed the open-source RISC-V PULPino processor[51] as part of the PULP (Parallel Ultra-Low Power) project for energy-efficient IoT computing.[52]

Western Digital announced a plan to embed RISC-V processors in their future products.[53]

Esperanto Technologies has disclosed that they are developing three RISC-V based processors: ET-Maxion, a high-performance RISC-V core; ET-Minion, an energy-efficient RISC-V core; and ET-Graphics, a RISC-V-based graphics processor.[54]
Design

ISA base and extensions

RISC-V has a modular design, consisting of alternative base parts, with additional optional extensions. The ISA base and its extensions are developed in a collective effort between industry, the research community and educational institutions. The base specifies instructions (and their encoding), control flow, registers (and their sizes), memory and addressing, logic (i.e., integer) manipulation, and ancillaries. The base by itself can implement a simplified general-purpose computer, with full software support, including a general-purpose compiler.

The standard extensions are specified to work with all of the standard bases, and with each other without conflict. Many RISC-V computers might implement the compact extension to reduce power consumption, code size, and memory usage.[4] There are also future plans to support hypervisors and virtualization.[19] Together with a supervisor instruction set extension, S, RVGC defines all instructions needed to conveniently support a Unix-style operating system.

The bases and standard extensions, with their versions and status, are:

  • RV32I: Base Integer Instruction Set, 32-bit (version 2.0, frozen)
  • RV32E: Base Integer Instruction Set (embedded), 32-bit, 16 registers (version 1.9, open)
  • RV64I: Base Integer Instruction Set, 64-bit (version 2.0, frozen)
  • RV128I: Base Integer Instruction Set, 128-bit (version 1.7, open)
  • M: Standard Extension for Integer Multiplication and Division (version 2.0, frozen)
  • A: Standard Extension for Atomic Instructions (version 2.0, frozen)
  • F: Standard Extension for Single-Precision Floating-Point (version 2.0, frozen)
  • D: Standard Extension for Double-Precision Floating-Point (version 2.0, frozen)
  • G: Shorthand for the base and above extensions
  • Q: Standard Extension for Quad-Precision Floating-Point (version 2.0, frozen)
  • L: Standard Extension for Decimal Floating-Point (version 0.0, open)
  • C: Standard Extension for Compressed Instructions (version 2.0, frozen)
  • B: Standard Extension for Bit Manipulation (version 0.36, open)
  • J: Standard Extension for Dynamically Translated Languages (version 0.0, open)
  • T: Standard Extension for Transactional Memory (version 0.0, open)
  • P: Standard Extension for Packed-SIMD Instructions (version 0.1, open)
  • V: Standard Extension for Vector Operations (version 0.2, open)
  • N: Standard Extension for User-Level Interrupts (version 1.1, open)

Frozen parts are expected to have their final feature set and to receive only clarifications before being ratified.

To tame the combinations of functionality that may be implemented, a nomenclature is defined to specify them.[4] The instruction set base is specified first, coding for RISC-V, the register bit-width, and the variant; e.g., RV64I or RV32E. Then follow letters specifying the implemented extensions, in canonical order (as above). The base, extended integer and floating-point calculations, and synchronisation primitives for multi-core computing (the base and extensions MAFD) are considered necessary for general-purpose computation, and thus have the shorthand G.

A small 32-bit computer for an embedded system might be RV32EC. A large 64-bit computer might be RV64GC, i.e., shorthand for RV64IMAFDC.

A naming scheme with Zxxx for standard extensions and Yxxx for non-standard (vendor-specific) extensions has been proposed. For example, the Ztso extension for total store ordering, an alternative memory consistency model to weak memory ordering, is under discussion.[55]

Register sets

RISC-V has 32 (or 16 in the embedded variant) integer registers and, when the floating-point extension is implemented, 32 floating-point registers. Except for memory access instructions, instructions address only registers.

The first integer register is a zero register, and the remainder are general-purpose registers. A store to the zero register has no effect, and a read always provides 0. Using the zero register as a placeholder makes for a simpler instruction set. E.g., move rx to ry becomes add r0 to rx and store in ry.[4]

Control and status registers exist, but user-mode programs can access only those used for performance measurement and floating-point management.

There are no instructions to save and restore multiple registers. Those were thought to be unnecessary, too complex, and perhaps too slow.[4]
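As a small illustrative sketch in RISC-V assembly (register choices are arbitrary), several common assembler pseudo-instructions expand to ordinary instructions that simply read or write the hard-wired zero register (x0):

    mv   a1, a0     # copy: assembles to  addi a1, a0, 0
    li   t0, 42     # load small constant: assembles to  addi t0, x0, 42
    neg  t1, t2     # negate: assembles to  sub t1, x0, t2
    nop             # assembles to  addi x0, x0, 0  (result discarded)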
Memory access

Like many RISC designs, RISC-V is a load/store architecture: instructions address only registers, with load and store instructions conveying data to and from memory.[4]

Memory consists of and is addressed as 8-bit bytes, with words being in little-endian order.[4] Words, up to the register size, can be accessed with the load and store instructions. Accessed memory addresses need not be aligned to their word-width, but accesses to aligned addresses will always be the fastest. For example, simple CPUs may implement unaligned accesses with slow software emulation driven from an alignment-failure interrupt.[4]

RISC-V manages memory systems that are shared between CPUs or threads by ensuring that a thread of execution always sees its memory operations in the programmed order. But between threads and I/O devices, RISC-V is simplified: it doesn't guarantee the order of memory operations, except by specific instructions, such as fence.

A fence instruction guarantees that the results of predecessor operations are visible to successor operations of other threads or I/O devices. fence can guarantee the order of combinations of both memory and memory-mapped I/O operations. E.g., it can separate memory read and write operations without affecting I/O operations. Or, if a system can operate I/O devices in parallel with memory, fence doesn't force them to wait for each other. A single CPU with one thread may decode fence as nop.
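As a hedged sketch of how fence is used in practice (the buffer and flag addresses in a0 and a1 are hypothetical), a producer thread can publish data to a consumer like this:

    # Producer: write the data, then publish a ready flag.
    sw    t0, 0(a0)      # store data into the buffer
    fence w, w           # order earlier writes before later writes
    li    t1, 1
    sw    t1, 0(a1)      # set the ready flag

    # Consumer: wait for the flag, then read the data.
    wait:
    lw    t2, 0(a1)
    beqz  t2, wait       # spin until the flag is set
    fence r, r           # order the flag read before later reads
    lw    t3, 0(a0)      # safe to read the buffer now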
Like many RISC instruction sets (and indeed some complex instruction set computer (CISC) instruction sets, such as the x86 and IBM System/360 families), RISC-V lacks address modes that write back to the registers. For example, it does not do auto-incrementing.[4]

RISC-V is little-endian to resemble other familiar, successful computers, for example, x86. This also reduces a CPU's complexity and costs slightly because it reads all sizes of words in the same order. For example, the RISC-V instruction set decodes starting at the lowest-addressed byte of the instruction. The specification leaves open the possibility of non-standard big-endian or bi-endian systems.[4]

Some RISC CPUs (such as MIPS, PowerPC, DLX, and Berkeley's RISC-I) place 16 bits of offset in the loads and stores. They set the upper 16 bits by a "load upper word" instruction. This permits upper-halfword values to be set easily, without shifting bits. However, most use of the upper half-word instruction makes 32-bit constants, like addresses. RISC-V uses a SPARC-like combination of 12-bit offsets and 20-bit "set upper" instructions. The smaller 12-bit offset helps compact, 32-bit load and store instructions select two of 32 registers yet still have enough bits to support RISC-V's variable-length instruction coding.[4]

Immediates

RISC-V handles 32-bit constants and addresses with instructions that set the upper 20 bits of a 32-bit register. Load upper immediate, lui, loads 20 bits into bits 31 through 12. Another instruction, auipc, generates the same 20 upper address bits by adding an offset to the program counter and storing the result into a base register. This permits position-independent code to have 32-bit addresses relative to the program counter. The base register can be used as-is with the 12-bit offsets of the loads and stores. If needed, addi can set the lower 12 bits of a register. In 64-bit ISAs, lui and auipc sign-extend the result to 64 bits.[4]

Some fast CPUs may interpret combinations of instructions as single "fused" instructions. lui or auipc may be good candidates to fuse with loads or stores.
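To make this concrete, here is a minimal sketch in RISC-V assembly (the symbol counter is a hypothetical variable; %pcrel_hi and %pcrel_lo are the GNU assembler's relocation operators; the constant and registers are arbitrary):

    # Build the 32-bit constant 0x12345678.
    lui   t0, 0x12345        # t0 = 0x12345000 (upper 20 bits)
    addi  t0, t0, 0x678      # t0 = 0x12345678 (lower 12 bits)
    # Note: addi sign-extends its 12-bit immediate, so constants whose
    # bit 11 is set need the lui value incremented by one to compensate.

    # Position-independent, PC-relative access to a variable.
    1: auipc t1, %pcrel_hi(counter)      # upper 20 bits of (counter - pc)
    lw    t2, %pcrel_lo(1b)(t1)          # low 12 bits complete the address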
Subroutine calls, jumps, and branches

RISC-V's subroutine call jal (jump and link) places its return address in a register. This is faster in many computer designs, because it saves a memory access compared to systems that push a return address directly on a stack in memory. jal has a 20-bit signed (two's complement) offset. The offset is multiplied by 2, then added to the PC to generate a relative address to a 32-bit instruction. If the result is not at a 32-bit address (i.e., evenly divisible by 4), the CPU may force an exception.[4]

RISC-V CPUs jump to calculated addresses using a "jump and link-register" instruction, jalr. jalr is similar to jal, but gets its destination address by adding a 12-bit offset to a base register. (In contrast, jal adds a larger 20-bit offset to the PC.)

jalr's bit format is like the register-relative loads and stores. Like them, jalr can be used with the instructions which set the upper 20 bits of a base register in order to make 32-bit branches, either to an absolute address (using lui) or a PC-relative one (using auipc for position-independent code). (Using a constant zero base address allows single-instruction calls to a small, fixed positive or negative address, the offset.)

RISC-V recycles jal and jalr to get unconditional 20-bit PC-relative jumps and unconditional register-based 12-bit jumps. Jumps just make the linkage register 0 so that no return address is saved.[4] RISC-V also recycles jalr to return from a subroutine: to do this, jalr's base register is set to be the linkage register saved by jal or jalr, jalr's offset is zero, and the linkage register is zero, so that there is no offset and no return address is saved.

Like many RISC designs, in a subroutine call, a RISC-V compiler must use individual instructions to save registers to the stack at the start, and then restore these from the stack on exit. RISC-V has no "save multiple" or "restore multiple" register instructions. These were thought to make the CPU too complex, and possibly slow, as well.[56] This can take more code space. Designers planned to reduce code size with library routines to save and restore registers.[57]

RISC-V has no condition code register or carry bit. The designers believed that condition codes make fast CPUs more complex by forcing interactions between instructions in different stages of execution. This choice makes multiple-precision arithmetic more complex. Also, a few numerical tasks need more energy.[4] Instead, RISC-V has short branches that perform comparisons: equal, not-equal, less-than, unsigned less-than, greater-than and unsigned greater-than. Ten comparison-branch operations are implemented with only six instructions, by reversing the order of operands in the assembler. For example, "branch if greater than" can be done by "less than or equal" with a reversed order of operands.[4] The comparing branches have a twelve-bit signed range, and jump relative to the PC.[4]

RISC-V's ISA requires default branch predictions for CPUs: backward conditional branches should be predicted "taken", and forward conditional branches predicted "not taken". The predictions are easy to decode in a pipelined CPU: branch addresses are signed numbers added to the PC. Backward branches have negative two's complement addresses, and therefore have a one in the most significant bit of the address. Forward branches have a zero. The most significant bit is in a fixed location in the operation code in order to speed up the pipeline. Complex CPUs can add branch predictors to work well even with unusual data or situations.

The ISA manual recommends that software be optimized to avoid branch stalls by using the default branch predictions. This reuses the most significant bit of the signed relative address as a "hint bit" to tell whether the conditional branch will be taken or not. So, no other hint bits are needed in the operation codes of RISC-V branches. This makes more bits available in the branch operation codes. Simple, inexpensive CPUs can merely follow the default predictions and still perform well with optimizing compilers. Compilers can still perform statistical path optimization, if desired.[4] To avoid unnecessary loading of branch prediction electronics (and therefore unnecessary pipeline stalls), the comparing branch codes should never be used for unconditional jumps.[4]

RISC-V does not support predication (the conditional execution of instructions), as its designers claim that CPUs without predication are easier to design, and optimizing compilers are less likely to mistakenly use predication where it should not be used. The designers claim that very fast, out-of-order CPU designs do predication anyway, by doing the comparison branch and conditional code in parallel, then discarding the unused path's effects. They also claim that even in simpler CPUs, predication is less valuable than branch prediction, which can prevent most stalls associated with conditional branches. Code without predication is larger, with more branches, but they also claim that a compressed instruction set (such as RISC-V's set "C") solves that problem in most cases.[4]

Many RISC designs have included a branch delay slot, a position after a branch instruction that can be filled with an instruction which is executed whether or not the branch is taken. This feature can improve the performance of pipelined CPUs by absorbing some of the time wasted if a CPU mispredicts the operation of a conditional branch and the CPU's pipeline stalls. RISC-V omits a branch delay slot because it complicates multicycle CPUs, superscalar CPUs, and long pipelines. Dynamic branch predictors have succeeded well enough to reduce the need for delayed branches.[4]
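A brief hedged sketch in RISC-V assembly (labels and registers are illustrative) of the call, return, and branch conventions described above:

    jal   ra, do_work        # call: ra (x1) <- address of the next instruction
    # ... caller continues here after the call returns ...

    do_work:
    # ... body of the subroutine ...
    ret                      # pseudo-instruction for  jalr x0, 0(ra)

    j     skip               # plain jump: pseudo for  jal x0, skip (no link saved)

    # "Greater than" style branches are assembler pseudo-instructions that
    # swap the operands of the six real comparison branches.
    bgt   a0, a1, bigger     # assembles to  blt a1, a0, bigger
    ble   a0, a1, smaller    # assembles to  bge a1, a0, smaller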
Arithmetic and logic sets

RISC-V segregates math into a minimal set of integer instructions (set "I") with add, subtract, shift, bit-wise logic and comparing-branches. These can simulate most of the other RISC-V instruction sets with software. (The atomic instructions are a notable exception.) RISC-V currently lacks the "count leading zero" and bit-field operations normally used to speed software floating-point in a pure-integer processor.

The integer multiplication instructions (set "M") include signed and unsigned multiply and divide. Double-precision integer multiplies and divides are included, as multiplies and divides that produce the "high word" of the result. The ISA document recommends that implementors of CPUs and compilers "fuse" a standardized sequence of high and low multiply and divide instructions into a single operation if possible.[4]

The floating-point instructions (set "F") include single-precision arithmetic and also comparison-branches similar to the integer arithmetic. They require an additional set of 32 floating-point registers, which are separate from the integer registers. The double-precision floating-point instructions (set "D") generally assume that the floating-point registers are 64-bit (i.e., double-width), and the "F" subset is coordinated with the "D" set. A quad-precision 128-bit floating-point ISA ("Q") is also defined. RISC-V computers without floating-point can use a floating-point software library.[4]

RISC-V does not cause exceptions on arithmetic errors, including overflow, underflow, subnormal, and divide by zero. Instead, both integer and floating-point arithmetic produce reasonable default values and set status bits. Divide-by-zero can be discovered by a single branch after the division. The status bits can be tested by an operating system or periodic interrupt.[4]
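As a hedged sketch of the two points above (register choices and the handler label are arbitrary): the full-width multiply sequence the ISA document suggests fusing, and a divide-by-zero check done with an ordinary branch rather than an exception:

    # Full-width product: high word first, then low word, same operands,
    # so a CPU may fuse the pair into one multiply.
    mulh  t0, a0, a1         # t0 = high word of a0 * a1 (signed)
    mul   t1, a0, a1         # t1 = low word of a0 * a1

    # Integer division never traps; test the divisor (or the result) yourself.
    beqz  a3, div_by_zero    # branch to a handler if the divisor is zero
    div   t2, a2, a3         # otherwise divide normally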
Atomic memory operations

RISC-V supports computers that share memory between multiple CPUs and threads. RISC-V's standard memory consistency model is release consistency. That is, loads and stores may generally be reordered, but some loads may be designated as "acquire" operations which must precede later memory accesses, and some stores may be designated as "release" operations which must follow earlier memory accesses.[4]

The base instruction set includes minimal support in the form of a fence instruction to enforce memory ordering. Although this is sufficient (fence r, rw provides "acquire" and fence rw, w provides "release"), combined operations can be more efficient.[4]

The atomic memory operation extension supports two types of atomic memory operations for release consistency. First, it provides general-purpose "load-reserved" lr and "store-conditional" sc instructions. lr performs a load, and tries to reserve that address for its thread. A later store-conditional sc to the reserved address will be performed only if the reservation is not broken by an intervening store from another source. If the store succeeds, a zero is placed in a register. If it failed, a non-zero value indicates that software needs to retry the operation. In either case, the reservation is released.[4]

The second group of atomic instructions performs read-modify-write sequences: a load (which is optionally a load-acquire) to a destination register, then an operation between the loaded value and a source register, then a store of the result (which may optionally be a store-release). Making the memory barriers optional permits combining the operations. The optional operations are enabled by "acquire" and "release" bits which are present in every atomic instruction. RISC-V defines nine possible operations: swap (use source register value directly); add; bitwise and, or, and exclusive-or; and signed and unsigned minimum and maximum.[4]

A system design may optimize these combined operations more than lr and sc. For example, if the destination register for a swap is the constant zero, the load may be skipped. If the value stored is unmodified since the load, the store may be skipped.[4]

The IBM System/370 and its successors, including z/Architecture, and x86 both implement a compare-and-swap (cas) instruction, which tests and conditionally updates a location in memory: if the location contains an expected old value, cas replaces it with a given new value; it then returns an indication of whether it made the change. However, a simple load-type instruction is usually performed before the cas to fetch the old value. The classic problem is that if a thread reads (loads) a value "A", calculates a new value "C", and then uses cas to replace "A" with "C", it has no way to know whether concurrent activity in another thread has replaced "A" with some other value "B" and then restored the "A" in between. In some algorithms (e.g., ones in which the values in memory are pointers to dynamically allocated blocks), this ABA problem can lead to incorrect results. The most common solution employs a "double-wide cas" instruction to update both the pointer and an adjacent counter; unfortunately, such an instruction requires a special instruction format to specify multiple registers, performs several reads and writes, and can have complex bus operation.[4]

The lr/sc alternative is more efficient. It usually requires only one memory load, and minimizing slow memory operations is desirable. It's also exact: it controls all accesses to the memory cell, rather than just assuring a bit pattern. However, unlike cas, it can permit livelock, in which two or more threads repeatedly cause each other's instructions to fail. RISC-V guarantees forward progress (no livelock) if the code follows rules on the timing and sequence of instructions: 1) it must use only the "I" subset; 2) to prevent repetitive cache misses, the code (including the retry loop) must occupy no more than 16 consecutive instructions; 3) it must not include any system or fence instructions, or taken backward branches, between the lr and sc; 4) the backward branch to the retry loop must be to the original sequence.[4]

The specification gives examples of how to use this subset to lock a data structure.[4]
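A small hedged sketch of both styles in RISC-V assembly (register use is illustrative; the word at (a0) is a shared variable): an lr/sc retry loop that follows the forward-progress rules above, and the equivalent single combined atomic instruction:

    # Atomically add t0 to the word at (a0) with load-reserved/store-conditional.
    retry:
    lr.w.aq   t1, (a0)          # load and reserve, acquire ordering
    add       t2, t1, t0        # compute the new value (plain "I" instruction)
    sc.w.rl   t3, t2, (a0)      # attempt the store, release ordering
    bnez      t3, retry         # non-zero means the reservation was lost

    # The same effect with one combined atomic instruction.
    amoadd.w.aqrl t1, t0, (a0)  # t1 <- old value; memory <- old value + t0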
Compressed subset

The standard RISC-V ISA specifies that all instructions are 32 bits. This makes for a particularly simple implementation, but like other RISC processors with such an instruction encoding, results in larger code size than in other instruction sets.[4][56] To compensate, RISC-V's "32-bit" instructions are actually 30 bits; 3/4 of the opcode space is reserved for an optional (but recommended) variable-length "compressed" instruction set, RVC (extension letter "C"),[57] that includes 16-bit instructions. Like ARM's Thumb and MIPS16, the compressed instructions are simply aliases for a subset of the larger instructions. Unlike ARM's Thumb or the MIPS compressed set, space was reserved from the beginning, so there is no separate operating mode. Standard and compressed instructions may be intermixed freely.[4][56] Because (like Thumb-1 and MIPS16) the compressed instructions are simply alternate encodings (aliases) for a selected subset of larger instructions, the compression can be implemented in the assembler, and it is not essential for the compiler to even know about it.

A prototype of RVC was tested in 2011.[56] The prototype code was 20% smaller than an x86 PC and MIPS compressed code, and 2% larger than ARM Thumb-2 code.[56] It also substantially reduced both the needed cache memory and the estimated power usage of the memory system.[56] The researcher intended to reduce the code's binary size for small computers, especially embedded computer systems. The prototype included 33 of the most frequently used instructions, recoded as compact 16-bit formats using operation codes previously reserved for the compressed set.[56] The compression was done in the assembler, with no changes to the compiler. Compressed instructions omitted fields that are often zero, used small immediate values, or accessed subsets (16 or 8) of the registers. addi is very common and often compressible.[56]

Much of the difference in size compared to ARM's Thumb set occurred because RISC-V, and the prototype, have no instructions to save and restore multiple registers. Instead, the compiler generated conventional instructions that access the stack. The prototype RVC assembler then often converted these to compressed forms that were half the size. However, this still took more code space than the ARM instructions that save and restore multiple registers. The researcher proposed to modify the compiler to call library routines to save and restore registers. These routines would tend to remain in a code cache and therefore run fast, though probably not as fast as a save-multiple instruction.[56]
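As a hedged illustration (a sketch of a typical function prologue and epilogue; offsets and registers are arbitrary), with the "C" extension the assembler silently emits 16-bit encodings whenever an instruction fits one, so the source does not change:

    addi  sp, sp, -16     # fits a 16-bit encoding (c.addi16sp)
    sw    ra, 12(sp)      # fits a 16-bit encoding (sp-relative store, c.swsp)
    mv    a0, s0          # fits a 16-bit encoding (c.mv)
    addi  a0, a0, 1       # fits a 16-bit encoding (c.addi)
    lw    ra, 12(sp)      # fits a 16-bit encoding (c.lwsp)
    addi  sp, sp, 16      # fits a 16-bit encoding (c.addi16sp)
    ret                   # fits a 16-bit encoding (c.jr ra)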
Embedded subset

An instruction set for the smallest "embedded" CPUs (set "E") is reduced in other ways: only 16 of the 32-bit integer registers are supported. Floating-point instructions should not be supported (the specification forbids it as uneconomical), so a floating-point software library must be used.[4] The compressed set "C" is recommended. The privileged instruction set supports only machine mode, user mode, and memory schemes that use base-and-bound address relocation.[19]

There is also discussion of a microcontroller profile for RISC-V, to ease development of deeply embedded systems. It centers on faster, simple C-language support for interrupts, simplified security modes and a simplified POSIX application binary interface.[58]

Correspondents have also proposed smaller, non-standard, 16-bit "RV16E" ISAs: one would use 16 × 16-bit integer registers, using the standard "EIMC" ISAs (including 32-bit instructions);[59] another proposal would only use the 16-bit "C" instructions with 8 × 16-bit registers. A full RV16EG was said to be possible with a completely recoded ISA.[60]

Privileged instruction set

RISC-V's ISA includes a separate privileged instruction set specification, which is currently (July 2017) preliminary.[19] Version 1.10 of the specification supports several types of computer systems:

  • Systems that have only "machine mode", perhaps for embedded systems.
  • Systems with both machine mode (for the supervisor) and user mode, perhaps to implement Linux.
  • Systems with machine mode, hypervisors, multiple supervisors, and user modes under each supervisor.

These correspond roughly to systems with up to four "rings" of privilege and security, at most: machine, hypervisor, supervisor and user. Each layer also is expected to have a thin layer of standardized supporting software that communicates to a more-privileged layer, or hardware.[19]

The overall plan for this ISA is to make the hypervisor mode orthogonal to the user and supervisor modes.[61] The basic feature is a configuration bit that either permits supervisor-level code to access hypervisor registers, or causes an interrupt on accesses. This bit lets supervisor mode directly handle the hardware needed by a hypervisor. This simplifies a type 2 hypervisor, hosted by an operating system, which is a popular mode to run warehouse-scale computers. To support type 1, unhosted hypervisors, the bit can cause these accesses to interrupt to a hypervisor. The bit simplifies nesting of hypervisors, in which a hypervisor runs under a hypervisor. It's also said to simplify supervisor code by letting the kernel use its own hypervisor features with its own kernel code. As a result, the hypervisor form of the ISA supports five modes: machine, supervisor, user, supervisor-under-hypervisor and user-under-hypervisor.

The privileged instruction set specification explicitly defines "hardware threads", or "harts". Multiple hardware threads are a common practice in larger, more powerful computers. When one thread is stalled, waiting for memory, others can often proceed. Hardware threads can help make better use of the large number of registers and execution units in large CPUs. Finally, hardware threads can be a simple, powerful way to handle interrupts: no saving or restoring of registers is required, simply executing a different hardware thread. The only hardware thread required in a RISC-V computer is thread zero.[19]

The existing control and status register definitions support RISC-V's error and memory exceptions, and a small number of interrupts. For larger systems, the specification also defines an interrupt controller. Interrupts always start at the highest-privileged machine level, and the control registers of each level have explicit "forwarding" bits to route interrupts to less-privileged code. For example, the hypervisor need not include software that executes on each interrupt to forward an interrupt to an operating system. Instead, on set-up, it can set bits to forward the interrupt.[19]

Several memory systems are supported in the specification. Physical-only is suited to the smallest embedded systems. There are also three UNIX-style virtual memory systems for memory cached in mass-storage systems. The virtual memory systems have three sizes, with addresses sized 32, 39 and 48 bits. All virtual memory systems support 4 KiB pages, multilevel page-table trees, and use very similar algorithms to walk the page table trees. All are designed for either hardware or software page-table walking.
To optionally reduce the cost of page-table walks, super-sized pages may be leaf pages in higher levels of a system's page table tree. SV32 has a two-layer page table tree and supports 4 MiB superpages. SV39 has a three-level page table, and supports 2 MiB superpages and 1 GiB gigapages. SV48 is required to support SV39. It also has a 4-level page table and supports 2 MiB superpages, 1 GiB gigapages, and 512 GiB terapages. Superpages are aligned on the page boundaries for the next-lowest size of page.[19]

Bit manipulation

Substantial work was done to produce a preliminary, though unapproved, bit-manipulation ("B") ISA for RISC-V. Done well, a bit-manipulation subset can aid cryptographic, graphic and mathematical operations. The criteria for inclusion documented in the draft were compliance with RISC-V philosophies and ISA formats, substantial improvements in code density or speed (i.e., at least a 3-for-1 reduction in instructions), and substantial real-world applications, including preexisting compiler support. Version 0.36 included[62] uncontroversial instructions to count leading zeros, count one bits, perform 'and' with complement, shift ones, rotate, a generalized bit-reverse and shuffle, byte swaps, bit extracts and deposits, and some bit-manipulation additions for the compressed set (not, neg and reverse). It also includes a controversial proposal for bit-field extract and place, using a nonstandard 48-bit instruction format.

Packed SIMD

For simple, cost-reduced RISC-V systems, there is a proposal to use the floating-point registers' bits to perform parallel SIMD sub-word arithmetic. This is widely used to accelerate multimedia and other digital signal processing.[4] This ISA is (as of 2016) undefined, but could resemble PA-RISC's multimedia instructions. Besides its native 64-bit math, the PA-RISC MAX2 CPU could do arithmetic on four 16-bit subwords at a time, with several overflow methods. It also could move subwords to different positions. PA-RISC's MAX2 was intentionally simplified. It lacked support for 8-bit or 32-bit subwords. The 16-bit subword size was chosen to support most digital signal processing tasks. These instructions were inexpensive to design and build. However, they increased the CPU's performance on digital signal processing tasks by 48-fold or more, enabling practical real-time video codecs in 1995.[63][64]

Vector set

The proposed vector-processing instruction set may make the packed SIMD set obsolete. The designers hope to have enough flexibility so that a CPU can implement vector instructions in a standard processor's registers. This would enable minimal implementations with similar performance to a multimedia ISA, as above. However, a true vector coprocessor could execute the same code with higher performance.[65]

The vector-processing proposal is currently (29 June 2015) a conservative, flexible design of a general-purpose mixed-precision vector processor, suitable for executing compute kernels. Code would port easily to CPUs with differing vector lengths, ideally without recompiling.[65]

In contrast, short-vector SIMD extensions are less convenient. These are used in x86, ARM and PA-RISC. In these, a change in word-width forces a change to the instruction set to expand the vector registers (in the case of x86, from 64-bit MMX registers to 128-bit SSE, to 256-bit AVX, and AVX-512).
The result is not only a growing instruction set, but also a need to port working code to the new instructions.

In the RISC-V vector ISA, rather than fix the vector length in the architecture, an instruction (setvl) is available which takes a requested size and sets the vector length to the minimum of the hardware limit and the requested size. So, the RISC-V proposal is more like Cray's long-vector design. That is, each of up to 32 vectors is the same length.[65]

The application specifies the total vector width it requires, and the processor determines the vector length it can provide with available on-chip resources. This takes the form of an instruction (vsetcfg) with four immediate operands, specifying the number of vector registers of each available width needed. The total must be no more than the addressable limit of 32, but may be less if the application does not require them all. The vector length is limited by the available on-chip storage divided by the number of bytes of storage needed for each entry. (Added hardware limits may also exist, which in turn may permit SIMD-style implementations.)[65]

Outside of vector loops, the application can request zero vector registers, saving the operating system the work of preserving them on context switches.[65]

The vector length is not only architecturally variable, but designed to vary at run time also. To achieve this flexibility, the instruction set is likely to use variable-width data paths and variable-type operations using polymorphic overloading.[65] The plan is that these can reduce the size and complexity of the ISA and compiler.[65] Recent experimental vector processors with variable-width data paths also show profitable increases in operations per second (speed), per area (lower cost), and per watt (longer battery life).[66]

Unlike a typical modern graphics processing unit, there are no plans to provide special hardware to support branch predication. Instead, lower-cost compiler-based predication will be used.[65][67]
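As a rough, heavily hedged sketch of the strip-mining style this enables: the setvl behaviour below follows the description above, but the vector register names and the vector load/add/store mnemonics are illustrative placeholders, not instructions from the draft specification:

    # Add two arrays of a2 elements, element-wise, in hardware-sized chunks.
    loop:
    setvl t0, a2           # vector length = min(hardware limit, a2)
    vld   v0, (a0)         # placeholder: load t0 elements from a0
    vld   v1, (a1)         # placeholder: load t0 elements from a1
    vadd  v0, v0, v1       # placeholder: element-wise add
    vst   v0, (a0)         # placeholder: store t0 elements back to a0
    slli  t1, t0, 2        # bytes consumed this pass (4-byte elements)
    add   a0, a0, t1
    add   a1, a1, t1
    sub   a2, a2, t0       # elements remaining
    bnez  a2, loop         # repeat until done

The same loop would run unchanged on hardware with any vector length, which is the portability property the proposal aims for.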
External debug system

There is a preliminary specification for RISC-V's hardware-assisted debugger. The debugger will use a transport system such as Joint Test Action Group (JTAG) or Universal Serial Bus (USB) to access debug registers. A standard hardware debug interface may support either a "standardized abstract interface" or "instruction feeding".[68][69]

The exact form of the "abstract interface" remains undefined (January 2017), but proposals include a memory-mapped system with standardized addresses for the registers of debug devices, or a command register and a data register accessible to the communication system.[68] Correspondents claim that similar systems are used by Freescale's BDM for some CPUs, ARM, OpenRISC, and Aeroflex's LEON.[68]

In "instruction feeding", the CPU will process a debug exception to execute individual instructions written to a register. This may be supplemented with a data-passing register and a module to directly access the memory. Instruction feeding lets the debugger access the computer exactly as software would. It also minimizes changes in the CPU, and adapts to many types of CPU. This was said to be especially apt for RISC-V because RISC-V is explicitly designed for many types of computers. The data-passing register allows a debugger to write a data-movement loop to RAM, and then execute the loop to move data into or out of the computer at a speed near the maximum speed of the debug system's data channel.[68] Correspondents claim that similar systems are used by MIPS, Intel Quark, Tensilica's Xtensa, and Freescale Power CPUs' BDM.[68]

What are the categories of programming languages?

Well, I am not certain to what extent you need the categorization of programming languages. Since your question sets no such boundary, let me articulate the answer in depth.

So, what's a programming language?

A programming language is a formal computer language or constructed language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs to control the behavior of a machine or to express algorithms.

What are the various categories of programming languages?

You will be surprised to know that programming languages can be categorized into 47 different types, along with more sub-types. Let's explore.

Array languages: Array programming (also known as vector or multidimensional programming) generalizes operations on scalars to apply transparently to vectors, matrices, and higher-dimensional arrays. Examples: A+, Analytica, APL, Chapel, Fortran, Freemat, GAUSS, J, Julia, K, MATLAB, Octave, Q, R, S, S-Lang, SequenceL, X10, ZPL, IDL, Wolfram Language.

Assembly languages: Assembly languages directly correspond to a machine language, so machine code instructions appear in a form understandable by humans. Assembly languages let programmers use symbolic addresses, which the assembler converts to absolute addresses. Most assemblers also support macros and symbolic constants.

Authoring languages: Bigwig (a web design and development language), PILOT, TUTOR, Lasso.

Constraint programming languages: AIMMS (an algebraic modeling language with support for constraint programming), Alma-0 (a small, strongly typed constraint language with a limited number of features inspired by logic programming, supporting imperative programming), AMPL (an algebraic modeling language with support for constraint programming), Babelsberg (a family of object-constraint programming languages for Ruby, JavaScript, Squeak, and Python), Bertrand (a language for building constraint programming systems), Common Lisp via Screamer (a free software library which provides backtracking and CLP(R), CHiP features), Constraint Handling Rules, MiniZinc (a high-level constraint programming system, BSD-style license), Kaleidoscope (an object-oriented imperative constraint programming language), Oz, Claire, Curry (Haskell-based, with free implementations), SystemVerilog (a computer hardware simulation language with a built-in constraint solver), Wolfram Language.

Command-line interface languages: Command-line interface (CLI) languages are also called batch languages or job control languages. Examples: 4DOS (extended command-line shell for IBM PCs), bash (the Bourne-Again shell from GNU/FSF), CHAIN (Datapoint), CLIST (MVS Command List), CMS EXEC, csh and tcsh (C-like shells from Bill Joy at UC Berkeley), DCL (DIGITAL Command Language, the standard CLI language for VMS from DEC, Compaq, HP), DOS batch language (the standard CLI/batch language for the IBM PC running DOS operating systems, popular before Windows), EXEC 2, Expect (a Unix automation and test tool), fish (a Unix shell), Hamilton C shell (a C shell for Windows), JCL (punch-card-oriented batch control language for IBM System/360 family mainframes), ksh (a standard Unix shell, written by David Korn), Rc (command-line shell for Plan 9), REXX, sh (the standard Unix shell, written by Stephen R. Bourne),
TACL (Tandem Advanced Command Language), Windows batch language (the Windows batch file language as understood by COMMAND.COM and CMD.EXE), Windows PowerShell (Microsoft .NET-based CLI), zsh (a Unix shell).

Compiled languages: These are languages typically processed by compilers, though theoretically any language can be compiled or interpreted. Examples: ActionScript, Ada (multi-purpose language), ALGOL (extremely influential language design; the second high-level language compiler), SMALL Machine Algol Like Language, Ateji PX (an extension of the Java language for parallelism), BASIC (some dialects, including the first version of Dartmouth BASIC), BCPL, Blue, C (one of the most widely used procedural programming languages), C++ (one of the most widely used object-oriented (OO) languages, especially in large-scale, highly complex, high-performance software systems), C# (compiled into intermediate language, which generates a native image at runtime), Ceylon (compiled into JVM bytecode), CLIPPER 5.3 (programming language for DOS-based software), CLEO (Clear Language for Expressing Orders; used the compiler for the British Leo computers), COBOL, Cobra, Common Lisp, Crystal, Curl, D (attempts a "C++ done right" philosophy), DASL (compiles into Java, JavaScript, JSP, Flex, etc., which are further compiled into a .war file), Delphi (Borland's Object Pascal development system), DIBOL (Digital Interactive Business Oriented Language), Dylan, eC, Eiffel (object-oriented language developed by Bertrand Meyer), Sather, Ubercode, Emacs Lisp, Erlang, F# (compiled into intermediate language, which generates a native image at runtime), Factor, Forth (professional systems, like VFX and SwiftForth), Fortran (the first high-level, compiled language, from IBM, John Backus, et al.), GAUSS, Go, Gosu (compiled into JVM bytecode), Groovy (compiled into JVM bytecode), Haskell, Harbour, Java (usually compiled into JVM bytecode, although ahead-of-time (AOT) compilers exist that compile to machine code), JOVIAL, LabVIEW, Mercury, Nemerle (compiled into intermediate language bytecode), Nim, Objective-C, Pascal (most implementations), Plus, Python (compiled into intermediate virtual machine bytecode), RPG (Report Program Generator), Rust, Scala (compiled into JVM bytecode), Scheme (some implementations, e.g. Gambit), SequenceL (purely functional, automatically parallelizing and race-free), Simula (the first object-oriented language, developed by Ole-Johan Dahl and Kristen Nygaard), Smalltalk (generally compiled to platform-independent bytecode that runs on a virtual machine), Swift, ML, Standard ML (SML), Alice, OCaml, Turing, Vala (compiler for the GObject type system), Visual Basic (earlier versions compiled directly to a native runtime; recent .NET versions compile into intermediate language that is just-in-time compiled into a native image at runtime), Visual FoxPro, Visual Prolog, X++, X#, XL, Z++.

Concurrent languages: Message-passing languages provide language constructs for concurrency. The predominant paradigm for concurrency in mainstream languages such as Java is shared-memory concurrency based on monitors. Concurrent languages that make use of message passing have generally been inspired by CSP or the π-calculus, but have had little commercial success, except for Ada and Erlang.
Ada is a multipurpose language, and concurrent programming is only one option available. Examples: Ada (multi-purpose language), Alef (concurrent language with threads and message passing, used for systems programming in early versions of Plan 9 from Bell Labs), Ateji PX (an extension of the Java language for parallelism), ChucK (domain-specific programming language for audio, with precise control over concurrency and timing), Cilk (a concurrent C), Cω (C Omega, a research language extending C#, uses asynchronous communication), Clojure (a dialect of Lisp for the Java virtual machine), Chapel, Co-array Fortran, Concurrent Pascal (by Brinch Hansen), Curry, E (uses promises, ensures deadlocks cannot occur), Eiffel (through the SCOOP mechanism, Simple Concurrent Object-Oriented Computation), Erlang (uses asynchronous message passing with nothing shared), Go, Java, Join Java (concurrent language based on Java), X10, Julia, Join-calculus, Joule (dataflow language, communicates by message passing), Limbo (relative of Alef, used for systems programming in Inferno (operating system)), MultiLisp (Scheme variant extended to support parallelism), occam (influenced heavily by Communicating Sequential Processes (CSP)), occam-π (a modern variant of occam, which incorporates ideas from Milner's π-calculus), Orc, Oz (multiparadigm language, supports shared-state and message-passing concurrency, and futures; the Mozart Programming System is cross-platform Oz), Pict (essentially an executable implementation of Milner's π-calculus), Rust (actor-based), SALSA (actor language with token-passing, join, and first-class continuations for distributed computing over the Internet), Scala (implements Erlang-style actors on the JVM), SequenceL (purely functional, automatically parallelizing and race-free), SR (research language), Unified Parallel C, XProc (XML processing language, enabling concurrency).

Curly-bracket languages: The curly-bracket or curly-brace programming languages have a syntax that defines statement blocks using the curly bracket or brace characters { and }. This syntax originated with BCPL (1966), and was popularized by C (1972). Many curly-bracket languages descend from or are strongly influenced by C. Examples of curly-bracket languages include: ABCL/c+, Alef, Limbo, Go, AutoHotkey, AWK, B, bc, BCPL, C (developed circa 1970 at Bell Labs), C++, C#, Ceylon, ChucK (audio programming language), Cilk (concurrent C for multithreaded parallel programming), Cyclone (a safer C variant), D, Dart, DASL (based on Java), E, eC, ECMAScript, ActionScript, ECMAScript for XML, JavaScript, JScript, TypeScript, GML (Game Maker Language), GLSL, ICI, Java, Processing, Groovy, Join Java, Kotlin, Tea, X10, LPC, MSL, MEL, Nemerle (combines C# and ML features, provides syntax extension abilities), PCASTL, Perl, PHP, Pico, Pike, R, Rust, S-Lang, Scala, sed, SuperCollider, Swift, UnrealScript, Windows PowerShell (Microsoft .NET-based CLI), Yorick.

Dataflow languages: Dataflow programming languages rely on a (usually visual) representation of the flow of data to specify the program. They are frequently used for reacting to discrete events or for processing streams of data. Examples of dataflow languages include: Hartmann pipelines, G (used in LabVIEW), Lucid, Max, Oz, Prograph, Pure Data, Reaktor, StreamBase StreamSQL EventFlow, VEE, VHDL, VisSim, WebMethods Flow.

Data-oriented languages: Data-oriented languages provide powerful ways of searching and manipulating the relations that have been described as entity relationship tables which map one set of things into other sets.
Examples of data-oriented languages include: Clarion; Clipper; dBase (a relational database access language); MUMPS (an ANSI standard general purpose language with specializations for database work); Caché (similar to MUMPS); RDQL; SPARQL; SQL; Tutorial D (see also The Third Manifesto); Visual FoxPro (a native RDBMS engine, object-oriented, RAD); WebDNA; WebQL; Wolfram Language.

Data-structured Languages: Data-structured languages are those where logic is structured in ways similar to their data. Such languages are generally well suited to reflection and introspection. There are three main types: array-based, list-based, and stack-based. Assembly languages that statically link data inline with instructions can also be considered data-structured, in the most primitive way.

Decision Table Languages: Decision tables can be used as an aid to clarifying the logic before writing a program in any language, but in the 1960s a number of languages were developed where the main logic is expressed directly in the form of a decision table, including Filetab.

Declarative Languages: Declarative languages describe a problem rather than defining a solution. Declarative programming stands in contrast to imperative programming via imperative programming languages, where serial orders (imperatives) are given to a computer. In addition to the examples given just below, all (pure) functional and logic-based programming languages are also declarative. In fact, "functional" and "logical" constitute the usual subcategories of the declarative category. Examples: Analytica; Ant (partially declarative languages, partially imperative programming); DASL (partially declarative languages, partially imperative programming); Lustre; Mercury; MetaPost; Modelica; Prolog; Oz; RDQL; SequenceL (purely functional, automatically parallelizing and race-free); SPARQL; SQL; Wolfram Language; xBase; XSL Transformations.

Embeddable Languages: Source embeddable languages embed small pieces of executable code inside a piece of free-form text, often a web page. Client-side embedded languages are limited by the abilities of the browser or intended client. They aim to provide dynamism to web pages without the need to recontact the server. Server-side embedded languages are much more flexible, since almost any language can be built into a server. The aim of having fragments of server-side code embedded in a web page is to generate additional markup dynamically; the code itself disappears when the page is served, to be replaced by its output.

a. Server side: PHP; VBScript; SMX (dedicated to web pages); WebDNA (dedicated to database-driven websites). The above examples are particularly dedicated to this purpose. A large number of other languages, such as Erlang, Scala, Perl and Ruby, can be adapted (for instance, by being made into Apache modules).

b. Client side: ActionScript; Java; JavaScript; ECMAScript; JScript; VBScript (Windows only).

c. In object code: A wide variety of dynamic or scripting languages can be embedded in compiled executable code. Basically, object code for the language's interpreter needs to be linked into the executable. Source code fragments for the embedded language can then be passed to an evaluation function as strings. Application control languages can be implemented this way, if the source code is input by the user.
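As a rough sketch of that mechanism, here is the idea in Python (itself one of the languages commonly embedded this way). The snippet string, the variable names and the "host data" are all invented for the example; a real embedding would sandbox user code far more carefully.

```python
# A host application embedding an interpreter: the "script" arrives as a string
# and is handed to an evaluation function.
def run_user_snippet(snippet, environment):
    # eval() is Python's built-in evaluation function; real embeddings sandbox
    # user code much more thoroughly than simply hiding the builtins.
    return eval(snippet, {"__builtins__": {}}, environment)

host_data = {"width": 4, "height": 3}                   # data owned by the host
print(run_user_snippet("width * height", host_data))    # prints 12
```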
Languages with small interpreters are preferred.AngelScriptChEELIoJuliaLuaMiniDPythonRuby (via mruby)SquirrelTclEducational Languages: Languages developed primarily for the purpose of teaching and learning of programming.AliceBlocklyBlueCOMALElanLogoKTurtleModula-2PascalSchemeScratchSnap!TuringWolfram LanguageEsoteric Languages: An esoteric programming language is a programming language designed as a test of the boundaries of computer programming language design, as a proof of concept, or as a joke.BeatnikBefungeBrainfuckChefINTERCALLOLCODEMalbolgePietShakespeareWhitespaceExtension Languages: Extension programming languages are languages embedded into another program and used to harness its features in extension scripts.Ateji PX – an extension of the Java language for parallelismAutoLISP (specific to AutoCAD)BeanShellCALC/AL(C/SIDE)GuileEmacs LispJavaScript and some dialects (e.g. JScript)Lua – e.g. embedded in many gamesOpenCL – an extension of C and C++ to use the GPU and parallel extensions of the CPUOptimJ – an extension of the Java programming language with language support for writing optimization models and powerful abstractions for bulk data processingPerlPikePython (embedded in Maya, Blender and other 3-D animation packages)REXXRuby (Google SketchUp)S-LangSQLSquirrelTclVim scriptVBAWindows PowerShellFourth-Generation Languages: Fourth-generation programming languages are high-level languages built around database systems. They are generally used in commercial environments.ABAPCorVisionCSC's GraphTalkEasytrieve report generator (now CA-Easytrieve Plus)FOCUSIBM Informix-4GL / Aubit-4GLLINC 4GLMAPPER (Unisys/Sperry) – now part of BISMARK-IV (Sterling/Informatics) now VISION:BUILDER of CAProgress 4GLRevolution (not based on a database; still, the goal is to work at a higher level of abstraction than 3GLs)SASUbercode (VHLL, or Very High Level Language)UnifaceVisual DataFlexVisual FoxProxBaseFunctional Languages: Functional programming languages define programs and subroutines as mathematical functions. Many so-called functional languages are "impure", containing imperative features. Many functional languages are tied to mathematical calculation tools. Functional languages include:a. PureAgdaCharityCleanCoq (Gallina)CurryElmFregeHaskellHopeJoyMercuryMirandaIdrisSequenceLb. ImpureAPLATSCALC++ (since C++11)C#CeylonDDartCurlECMAScriptActionScriptECMAScript for XMLJavaScriptJScriptErlangElixirLFEF#FPrGroovyHopJJava (since version 8)JuliaLispClojureCommon LispDylanEmacs LispLFELittle bLogoSchemeRacket (formerly PLT Scheme)TeaMathematicaMLStandard ML (SML)AliceOCamlNemerleOpalOPS5PoplogPythonQ (equational programming language)Q (programming language from Kx Systems)RRubyREFALRustScalaSpreadsheetsWolfram LanguageHardware Description Languages: In electronics, a Hardware description language or HDL is a specialized computer language used to describe the structure, design and operation of electronic circuits, and most commonly, digital logic circuits. The two most widely used and well-supported HDL varieties used in industry are Verilog and VHDL. 
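To give a feel for what "describing hardware" means, here is a very loose sketch in plain Python. This is not a real HDL and the circuit chosen (a half adder) is just an example; the point is that the description states what the gates compute, and a small "test bench" then exercises every input combination, much as an HDL simulator would.

```python
# Plain Python, *not* an HDL: a behavioural description of a half adder
# (an XOR gate and an AND gate), plus a tiny test bench.
def half_adder(a, b):
    sum_bit = a ^ b   # XOR gate
    carry = a & b     # AND gate
    return sum_bit, carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"a={a} b={b} -> sum={s} carry={c}")
```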
Hardware description languages include:

HDLs for analog circuit design: Verilog-AMS (Verilog for Analog and Mixed-Signal); VHDL-AMS (VHDL with Analog/Mixed-Signal extension).

HDLs for digital circuit design: Advanced Boolean Expression Language (ABEL); Altera Hardware Description Language (AHDL); Bluespec; Confluence; ELLA; Evolve; Handel-C; HHDL; Impulse C; JHDL; Lava; Lola; M; MyHDL; PALASM; Ruby (hardware description language); SystemC; SystemVerilog; Verilog; VHDL (VHSIC HDL).

Imperative Languages: Imperative programming languages may be multi-paradigm and appear in other classifications. Here is a list of programming languages that follow the imperative paradigm: Ada; ALGOL; BASIC; Blue; C; C++; C#; Ceylon; COBOL; D; eC; FORTRAN; GAUSS; Go; Groovy; Java; Julia; Lua; MATLAB; machine language; Modula-2 and Modula-3; MUMPS; Nim; Oberon; Object Pascal; OCaml; Pascal; Perl; PHP; PROSE; Python; Ruby; Rust; Wolfram Language.

Interactive Mode Languages: Interactive mode languages act as a kind of shell: expressions or statements can be entered one at a time, and the result of their evaluation is seen immediately. The interactive mode is also known as a REPL (read–eval–print loop). Examples: APL; BASIC (some dialects); Clojure; Common Lisp; Dart (with Observatory or Dartium's developer tools); Erlang; F#; Forth; FPr; Fril; GAUSS; Groovy; Haskell (with the GHCi or Hugs interpreter); IDL; J; JavaScript (using command-line tools like Node.js or Rhino, or the developer tools built into web browsers like Firefox or Chrome); Julia; Lua; MUMPS (an ANSI standard general purpose language); Maple; Mathematica (Wolfram Language); MATLAB; ML; Perl; Pike; PostScript; Python; PROSE; R; REXX; Ruby (with IRB); Scala; Scheme; Smalltalk (anywhere in a Smalltalk environment); S-Lang (with the S-Lang shell, slsh); Tcl (with the Tcl shell, tclsh); Windows PowerShell (Microsoft .NET-based CLI); Visual FoxPro (Microsoft).

Interpreted Languages: Interpreted languages are programming languages in which programs may be executed from source code form, by an interpreter.
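The read–eval–print loop mentioned under Interactive Mode Languages above, and interpretation in general, can be sketched in a few lines of Python. This is illustrative only: it leans on Python's own eval rather than parsing anything itself.

```python
# A minimal read-eval-print loop: each line typed by the user is read,
# evaluated immediately, and the result printed.
while True:
    line = input(">>> ")
    if line in ("quit", "exit"):
        break
    try:
        print(eval(line))
    except Exception as exc:   # report the error and keep the session alive
        print("error:", exc)
```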
Theoretically, any language can be compiled or interpreted, so the term *interpreted language* generally refers to languages that are commonly interpreted rather than compiled.AntAPLAutoHotkey scripting languageAutoIt scripting languageBASIC (some dialects)DATABUS (later versions added optional compiling)DMEiffel (via "Melting Ice Technology" in EiffelStudio)Forth (interactive shell only; otherwise compiled to native or threaded code)FPr (Virtual machine: Text is compiled to linked lists; linked lists are interpreted)Game Maker LanguageGroovyHaskell (GHCi, Hugs, NHC, YHC etc.)JJulia (Compiled on the fly to machine code, but a transpiler Julia2C is also available.)JavaScriptLisp (Early versions, pre-1962, and some experimental ones; production Lisp systems are compilers, but many of them still provide an interpreter if needed.)TeaLPCLuaMUMPS (an ANSI standard general purpose language)MapleMathematicaMATLABOrielPascal (early implementations)PCASTLPerlPHPPiktPostScriptPROSEPythonREXXRRubyS-LangStandard ML (SML)SpinTclTI-BASICTorqueScriptthinBasic scripting languageVBScriptWindows PowerShell (Microsoft .NET-based CLI)Wolfram LanguageIterative Languages: Iterative languages are built around or offering generators.AldorAlphardC#CLUCobraEiffel, through "agents"IconIPL-vJuliaLuaPHPPythonSatherXL ("iterator" construct)List-based languages – LISPs: List-based languages are a type of data-structured language that are based upon the list data structure.LispArcClojureCommon LispDylanEmacs LispRacketSchemeLogoFPrJoyRTclTeaTRACLittle Languages: Little languages serve a specialized problem domain.awk – can serve as a prototyping language for C (shares similar syntax)Comet – used to solve complex combinatorial optimization problems in areas such as resource allocation and schedulingSQL – has only a few keywords, and not all the constructs needed for a full programming language – many database management systems extend SQL with additional constructs as a stored procedure languageLogic-based Languages: Logic-based languages specify a set of attributes that a solution must have, rather than a set of steps to obtain a solution. Examples:ALFAlma-0CLACL (CLAC-Language)CurryFrilJanusλProlog (a logic programming language featuring polymorphic typing, modular programming, and higher-order programming)LedaOz, and Mozart Programming System cross-platform OzPoplogProlog (formulates data and the program evaluation mechanism as a special form of mathematical logic called Horn logic and a general proving mechanism called logical resolution)Mercury (based on Prolog)Strawberry Prolog (standard Prolog with some extensions)Visual Prolog (object-oriented Prolog extension)ROOPMachine Languages: Machine languages are directly executable by a computer's CPU. They are typically formulated as bit patterns, usually represented in octal or hexadecimal. Each bit pattern causes the circuits in the CPU to execute one of the fundamental operations of the hardware. The activation of specific electrical inputs (e.g., CPU package pins for microprocessors), and logical settings for CPU state values, control the processor's computation. Individual machine languages are specific to a family of processors; machine-language code for one family of processors cannot run directly on processors in another family unless the processors in question have additional hardware to support it (for example, DEC VAX processors included a PDP-11 compatibility mode). They are (essentially) always defined by the CPU developer, not by 3rd parties. 
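As a toy illustration of the idea (not any real instruction set; the opcodes and the little program are invented), a machine language is just numbers that select fundamental operations. The Python loop below plays the role of the processor, fetching and executing one numeric instruction at a time.

```python
# A made-up "machine language": each opcode is a number that selects one
# fundamental operation on a single accumulator register.
LOAD, ADD, PRINT, HALT = 0x01, 0x02, 0x03, 0xFF

program = [LOAD, 7, ADD, 5, PRINT, HALT]   # load 7, add 5, print the result, stop

pc, acc = 0, 0                             # program counter and accumulator
while True:
    op = program[pc]
    if op == LOAD:
        acc = program[pc + 1]; pc += 2
    elif op == ADD:
        acc += program[pc + 1]; pc += 2
    elif op == PRINT:
        print(acc); pc += 1                # prints 12
    elif op == HALT:
        break
```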
The symbolic version, the processor's assembly language, is also defined by the developer, in most cases. Some commonly used machine code instruction sets are:
- ARM: the original 32-bit set; the 16-bit Thumb instructions (which use a subset of the registers); and the 64-bit set (a major architecture change, with more registers)
- DEC PDP-6/PDP-10/DECSYSTEM-20
- DEC PDP-11 (influenced the VAX and M68000)
- DEC VAX
- DEC Alpha
- Intel 8008, 8080 and 8085
- Zilog Z80
- x86: 16-bit x86, first used in the Intel 8086 (the Intel 8086 and 8088, the latter used in the first and early IBM PC; the 80186; and the 80286, the first x86 processor with protected mode, used in the IBM AT); IA-32, introduced in the 80386; and x86-64, whose original specification was created by AMD (there are vendor variants, AMD's AMD64 and Intel's Intel 64, but they are essentially the same)
- IBM System/360 and successors, including z/Architecture
- MIPS
- Motorola 6800
- Motorola 68000 family (CPUs used in early Apple Macintosh and early Sun computers)
- MOS Technology 65xx: the 6502 (CPU of the VIC-20, Apple II, and Atari 800), the 6510 (CPU of the Commodore 64), and the Western Design Center 65816/65802 (CPU of the Apple IIGS and, in a variant, the Super Nintendo Entertainment System)
- National NS320xx
- Power Architecture: POWER, first used in the IBM RS/6000, and PowerPC, used in the Power Macintosh and in many older-generation game consoles
- Sun/Oracle SPARC
- MCST Elbrus 2000

Macro Languages: Macro languages transform one source code file into another. A "macro" is essentially a short piece of text that expands into a longer one (not to be confused with hygienic macros), possibly with parameter substitution. They are often used to preprocess source code. Preprocessors can also supply facilities like file inclusion. Macro languages may be restricted to acting on specially labeled code regions (prefixed with a # in the case of the C preprocessor). Alternatively, they may not be, but in that case it is still often undesirable to (for instance) expand a macro embedded in a string literal, so they still need a rudimentary awareness of syntax. That being the case, they are often still applicable to more than one language. Contrast with source-embeddable languages like PHP, which are fully featured. Examples: cpp (the C preprocessor); m4 (originally from AT&T, bundled with Unix).

Application Macro Languages: Scripting languages such as Tcl and ECMAScript (ActionScript, ECMAScript for XML, JavaScript, JScript) have been embedded into applications. These are sometimes called "macro languages", although in a somewhat different sense to textual-substitution macros like m4.

Metaprogramming Languages: Metaprogramming is the writing of programs that write or manipulate other programs (or themselves) as their data, or that do at compile time part of the work that would otherwise be done at run time. In many cases, this allows programmers to get more done in the same amount of time than they would by writing all the code manually. Examples: C++; Curl; D; eC; Elixir; Forth; Groovy; Haskell; Julia; Lisp; Lua; Maude system; Mathematica; MetaOCaml; Nemerle; Perl; Python; Ruby; Rust; Scheme; SequenceL; Smalltalk; Wolfram Language; XL (concept programming).

Multiparadigm Languages: Multiparadigm languages support more than one programming paradigm. They allow a program to use more than one programming style.
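To make that concrete, here is one small task written in two different styles in Python (itself listed below as a multiparadigm language). The input list is arbitrary example data.

```python
# The same task -- sum the squares of the even numbers -- in two styles.
numbers = [1, 2, 3, 4, 5, 6]

# Imperative style: an explicit loop mutating an accumulator.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Functional style: a single expression, no mutation.
total_fp = sum(n * n for n in numbers if n % 2 == 0)

assert total == total_fp == 56
```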
The goal is to allow programmers to use the best tool for a job, admitting that no one paradigm solves all problems in the easiest or most efficient way.Ada (concurrent, distributed, generic (template metaprogramming), imperative, object-oriented (class-based))ALF (functional, logic)Alma-0 (constraint, imperative, logic)APL (functional, imperative)BETA (functional, imperative, object-oriented (class-based))C++ (generic, imperative, object-oriented (class-based), functional)C# (generic, imperative, object-oriented (class-based), functional, declarative)Ceylon (generic, imperative, object-oriented (class-based), functional, declarative)ChucK (imperative, object-oriented, time-based, concurrent, on-the-fly)Cobra (generic, imperative, object-oriented (class-based), functional, contractual)Common Lisp (functional, imperative, object-oriented (class-based), aspect-oriented (user may add further paradigms, e.g., logic))Curl (functional, imperative, object-oriented (class-based), metaprogramming)Curry (concurrent, functional, logic)D (generic, imperative, functional, object-oriented (class-based), metaprogramming)Delphi (generic, imperative, object-oriented (class-based), metaprogramming)Dylan (functional, object-oriented (class-based))eC (generic, imperative, object-oriented (class-based))ECMAScript (functional, imperative, object-oriented (prototype-based))ActionScriptECMAScript for XMLJavaScriptJScriptEiffel (imperative, object-oriented (class-based), generic, functional (agents), concurrent (SCOOP))F# (functional, generic, object-oriented (class-based), language-oriented)Fantom (functional, object-oriented (class-based))FPr (function-level, object-oriented (class-based))Go (functional, object-oriented (class-based), imperative, procedural),Groovy (functional, object-oriented (class-based),imperative,procedural)HarbourHopJ (functional, imperative, object-oriented (class-based))Julia (imperative, multiple dispatch ("object-oriented"), functional, metaprogramming)LabVIEW (dataflow, visual)Lava (object-oriented (class-based), visual)Leda (functional, imperative, logic, object-oriented (class-based))Lua (functional, imperative, object-oriented (prototype-based))Mercury (functional, logical, object-oriented)Metaobject protocols (object-oriented (class-based, prototype-based))Nemerle (functional, object-oriented (class-based), imperative, metaprogramming)Objective-C (imperative, object-oriented (class-based), reflective)OCaml (functional, imperative, object-oriented (class-based))Oz (functional (evaluation: eager, lazy), logic, constraint, imperative, object-oriented (class-based), concurrent, distributed), and Mozart Programming System cross-platform OzObject Pascal (imperative, object-oriented (class-based))Perl (imperative, functional (can't be purely functional), object-oriented, class-oriented, aspect-oriented (through modules))PHP (imperative, object-oriented)PikePoplog (functional, imperative, logic)Prograph (dataflow, object-oriented (class-based), visual)Python (functional, compiled, interpreted, object-oriented (class-based), imperative, metaprogramming, extension, impure, interactive mode, iterative, reflective, scripting)R (array, interpreted, impure, interactive mode, list-based, object-oriented prototype-based, scripting)Racket (functional, imperative, object-oriented (class-based) and can be extended by the user)REBOL (functional, imperative, object-oriented (prototype-based), metaprogramming (dialected))ROOP (imperative, logic, object-oriented (class-based), rule-based)Ruby 
(imperative, functional, object-oriented (class-based), metaprogramming)Rust (concurrent, functional, imperative, object-oriented)Scala (functional, object-oriented)Seed7 (imperative, object-oriented, generic)SISAL (concurrent, dataflow, functional)Spreadsheets (functional, visual)Tcl (functional, imperative, object-oriented (class-based))Tea (functional, imperative, object-oriented (class-based))Windows PowerShell (functional, imperative, pipeline, object-oriented (class-based))Wolfram LanguageXL (concept programming approach)Numerical Analysis:AIMMSAMPLAnalyticaGAUSSGAMSJuliaKlerer-May SystemMathematicaMATLABPROSESeneca – an Oberon variantWolfram LanguageNon-English-based Languages:ARLOGO – ArabicChinese BASIC – ChineseFjölnir – IcelandicLanguage Symbolique d'Enseignement – FrenchLexico – SpanishRapira – RussianObject-oriented Class-based Languages: Class-based Object-oriented programming languages support objects defined by their class. Class definitions include member data. Message passing is a key concept (if not the key concept) in Object-oriented languages. Polymorphic functions parameterized by the class of some of their arguments are typically called methods. In languages with single dispatch, classes typically also include method definitions. In languages with multiple dispatch, methods are defined by generic functions. There are exceptions where single dispatch methods are generic functions (e.g. Bigloo's object system).a. Multiple dispatchCommon LispCecilDylanJuliab. Single dispatchActionScript 3.0ActorAda 95 and Ada 2005 (multi-purpose language)BETABlueC++C#CeylonOxygene (formerly known as Chrome)ChucKCobraColdFusionCurlDDASLDelphiEGNU EeCEiffelSatherUbercodeF-ScriptFortran 2003FortressFPrGambasGame Maker LanguageHarbourJJavaProcessingGroovyJoin JavaTeaX10LabVIEWLavaLuaModula-2 (data abstraction, information hiding, strong typing, full modularity)Modula-3 (added more object-oriented features to Modula-2)NemerleIBM NetRexxOberon-2 (full object-orientation equivalence in an original, strongly typed, Wirthian manner)Object PascalObject REXXObjective-C (a superset of C adding a Smalltalk derived object model and message passing syntax)OCamlOz, Mozart Programming SystemPerl 5PHPPikePrographPython (interpretive language, optionally object-oriented)RealbasicRevolution (programmer does not get to pick the objects)RubyScalaSimula (the first object-oriented language, developed by Ole-Johan Dahl and Kristen Nygaard)Smalltalk (pure object-orientation, developed at Xerox PARC)BistroF-ScriptLittle SmalltalkSqueakScratchIBM VisualAgeVisualWorksSPINSuperColliderVBScript (Microsoft Office 'macro scripting' language)Visual DataFlexVisual FoxProVisual PrologX++XOTclObject-oriented Prototype-based Languages: Prototype-based languages are object-oriented languages where the distinction between classes and instances has been removed:ABCL/1ABCL/RABCL/R2ABCL/c plusAgoraCecilECMAScriptActionScriptECMAScript for XMLJavaScript (first named Mocha, then LiveScript)JScriptEtoys in SqueakIoLisaacLuaMOONewtonScriptObliqRREBOLSelf (the first prototype-based language, derived from Smalltalk)TADSOff-side Rule Languages: Off-side rule languages are those where blocks are formed, indicated, by their indentation.ISWIM, the abstract language that introduced the ruleABC, Python's parentPythonCobraBooGenieMiranda, Haskell's parentOrwellHaskellCurryElixirF#NimOccamSPINXLProcedural Languages: Procedural programming languages are based on the concept of the unit and scope (the data viewing range of an executable code 
statement). A procedural program is composed of one or more units or modules, either user coded or provided in a code library; each module is composed of one or more procedures, also called a function, routine, subroutine, or method, depending on the language. Examples of procedural languages include:Ada (multi-purpose language)ALGOL (extremely influential language design – the second high level language compiler)SMALL Machine Algol Like LanguageAlma-0BASIC (BASICs are innocent of most modularity in (especially) versions before about 1990)BCPLBLISSBlueCC++ (C with objects plus much else, such as, generics through STL)C# (similar to Java/C++)CeylonChucK (C/Java-like syntax, with new syntax elements for time and parallelism)COBOLCobraColdFusionCombined Programming Language (CPL)CurlDDASL (partly declarative, partly imperative)eCECMAScriptActionScriptECMAScript for XMLJavaScript (first named Mocha, then LiveScript)JScriptEiffelFortran (better modularity in later Standards)FGAUSSGoHarbourHyperTalkJavaGroovyJoin JavaTeaJOVIALJuliaLassoModula-2 (fundamentally based on modules)MathematicaMATLABMUMPS (More modular in its first release than a language of the time should have been; The standard has become still more modular since then.)NemerleOberon and Oberon-2 (improved, smaller, faster, safer follow-ons for Modula-2)Component PascalLagoonaSenecaOccamOrielPascal (successor to ALGOL 60, predecessor of Modula-2)Free Pascal (FPC)Object Pascal (Delphi)PCASTLPerlPikePL/CPL/I (large general purpose language, originally for IBM mainframes)PlusPROSEPythonRRapiraRPG (available only in IBM's System i midrange computers)RustS-LangVBScriptVisual BasicVisual FoxProWolfram LanguageX++X#XLReflective Languages: Reflective languages let programs examine and possibly modify their high level structure at runtime. This is most common in high-level virtual machine programming languages like Smalltalk, and less common in lower-level programming languages like C. Languages and platforms supporting reflection:BefungeC#CeylonCharmChucKCobraComponent Pascal BlackBox Component BuilderCurlDelphieCECMAScriptActionScriptECMAScript for XMLJavaScriptJScriptEiffelForthHarbourJavaJava virtual machineGroovyJoin JavaX10JuliaLispClojureCommon LispDylanLogoSchemeLuaMaude system.NET Framework Common Language RuntimeOberon-2 – ETH Oberon SystemObjective-CPCASTLPerlPHPPicoPoplogPOP-11PrologPythonREBOLRubySmalltalk (pure object-orientation, originally from Xerox PARC)BistroF-ScriptLittle SmalltalkSelfSqueakIBM VisualAgeVisualWorksSnobolTclWolfram LanguageXOTclX++XLRule-based Languages: Rule-based languages instantiate rules when activated by conditions in a set of data. Of all possible activations, some set is selected and the statements belonging to those rules execute. Rule-based languages include:awkCLIPSConstraint Handling RulesDroolsGOAL agent programming languageJessOPS5PrologToonTalk – robots are rulesMathematicaXSLT[citation needed]Wolfram LanguageScripting Languages: "Scripting language" has two apparently different, but in fact similar meanings. In a traditional sense, scripting languages are designed to automate frequently used tasks that usually involve calling or passing commands to external programs. Many complex application programs provide built-in languages that let users automate tasks. Those that are interpretive are often called scripting languages. Recently, many applications have built-in traditional scripting languages, such as Perl or Visual Basic, but there are quite a few native scripting languages still in use. 
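A minimal sketch of scripting in the traditional sense, in Python: glue code that calls an external program on a batch of files and renames them afterwards. The directory name, the file pattern and the external command (`wc`, a Unix tool assumed to be available) are all assumptions of this sketch.

```python
# Glue code: run an external tool on each log file, then mark the file as done.
import subprocess
from pathlib import Path

for path in sorted(Path("logs").glob("*.log")):       # hypothetical directory
    result = subprocess.run(["wc", "-l", str(path)],
                            capture_output=True, text=True)
    print(result.stdout.strip())                      # report the line count
    path.rename(path.with_suffix(".done"))            # mark the file as processed
```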
Many scripting languages are compiled to byte code and then this (usually) platform-independent byte code is run through a virtual machine (compare to Java virtual machine).AppleScriptAWKBeanShellBashCh (Embeddable C/C++ interpreter)CLISTColdFusionECMAScriptActionScriptECMAScript for XMLJavaScript (first named Mocha, then LiveScript)JScriptCMS EXECEXEC 2F-ScriptFalconGame Maker Language (GML)ICIIoJASSGroovyJoin JavaJulia (still, compiled on the fly to machine code)LassoLuaMAXScriptMELOrielPerlPHP (intended for Web servers)PiktPythonRREBOLREXXRevolutionRubySmalltalkS-LangsedTeaTclTorqueScriptVBScriptWebDNA, dedicated to database-driven websitesWindows PowerShell (Microsoft .NET-based CLI)WinbatchMany shell command languages such as the Unix shell or DCL on VMS have powerful scripting abilities.Stack-based Languages: Stack-based languages are a type of data-structured language that are based upon the stack data structure.BeatnikcolorForthFactorForthJoy (all functions work on parameter stacks instead of named parameters)PietPoplog via its implementation language POP-11PostScriptRPLSynchronous Languages: Synchronous programming languages are optimized for programming reactive systems, systems that are often interrupted and must respond quickly. Many such systems are also called real time systems, and are used often in embedded systems. Examples:ArgusAverestEsterelLustreSyncChartsSyntax Handling Languages: These languages assist with generating lexical analyzers and parsers for Context-free grammars.ANTLRCoco/R (EBNF with semantics)GNU bison (FSF's version of Yacc)GNU Flex (FSF's version of Lex)glex/gyacc (GoboSoft compiler compiler to Eiffel)lex (Lexical Analysis, from Bell Labs)M4yacc (yet another compiler compiler, from Bell Labs)JavaCCTransformation Languages:ATLAWKMOFM2TQVTXML languagesVisual Languages: Visual programming languages let users specify programs in a two-(or more)-dimensional way, instead of as one-dimensional text strings, via graphic layouts of various types.AnalyticaBlocklyCODEDRAKONFabrikG (used in LabVIEW)LavaLimnorMaxNXT-GPict programming languagePrographPure DataQuartz ComposerScratch (written in and based on Squeak, a version of Smalltalk)Snap!SimulinkSpreadsheetsSubtextToonTalkVEEVisSimvvvvEICASLABSome dataflow programming languages are also visual languages.Wirth Languages: Computer scientist Niklaus Wirth designed and implemented several influential languages.ALGOL WEulerModulaModula-2 (and Modula 3, etc. variants)Obliq Modula 3 variantOberon (Oberon, Oberon-07, and Oberon-2)Component PascalLagoonaOberon-2PascalObject Pascal ("umbrella" name for Delphi, Free Pascal, Oxygene and others)XML-based Languages: These are languages based on or that operate on XML.AntCωECMAScript for XMLMXMLLZXXAMLXPathXQueryXProcXSLTHope you are happy with my categorization…Enjoy Learning!!!
