Computer program
{{Short description|Instructions a computer can execute}} {{For|the TV program|The Computer Programme}} [[File:JavaScript_code.png|thumb|[[Source code]] for a computer program written in the [[JavaScript]] language. It demonstrates the ''appendChild'' method. The method adds a new child node to an existing parent node. It is commonly used to dynamically modify the structure of an HTML document.]] {{Program execution}}
A '''computer program''' is a [[sequence]] or set{{efn|The [[Prolog]] language allows for a database of facts and rules to be entered in any order. However, a question about a database must be at the very end.}} of instructions in a [[programming language]] for a [[computer]] to [[Execution (computing)|execute]]. It is one component of [[software]], which also includes [[software documentation|documentation]] and other intangible components.{{cite web | title=ISO/IEC 2382:2015 | website=ISO | date=2020-09-03 | url=https://www.iso.org/obp/ui/#iso:std:iso-iec:2382:ed-1:v1:en | access-date=2022-05-26 | quote=[Software includes] all or part of the programs, procedures, rules, and associated documentation of an information processing system. | archive-date=2016-06-17 | archive-url=https://web.archive.org/web/20160617031837/https://www.iso.org/obp/ui/#iso:std:iso-iec:2382:ed-1:v1:en | url-status=live }}
A ''computer program'' in its [[human-readable]] form is called [[source code]]. Source code needs another computer program to execute because computers can only execute their native [[machine instructions]]. Therefore, source code may be [[Translator (computing)|translated]] to machine instructions using a [[compiler]] written for the language. ([[Assembly language]] programs are translated using an [[Assembler (computing)|assembler]].) The resulting file is called an [[executable]]. Alternatively, source code may execute within an [[interpreter (computing)|interpreter]] written for the language.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 7 | quote = An alternative to compiling a source program is to use an interpreter. An interpreter can directly execute a source program[.] | isbn = 0-201-71012-9 }}
If the executable is requested for execution,{{efn|Either the user or another program makes the request.}} then the [[operating system]] [[Loader (computing)|loads]] it into [[Random-access memory|memory]]{{cite book |title=The Linux Programming Interface |last=Kerrisk |first=Michael |publisher=No Starch Press |year=2010 |isbn=978-1-59327-220-3 |quote = The kernel can load a new program into memory[.] |page=22}} and starts a [[Process (computing)|process]].{{cite book | last = Silberschatz | first = Abraham | title = Operating System Concepts, Fourth Edition | publisher = Addison-Wesley | year = 1994 | page = 98 | quote = Informally, a process is a program in execution. | isbn = 978-0-201-50480-4 }} The [[central processing unit]] will soon [[Context switch|switch]] to this process so it can [[Instruction cycle|fetch, decode, and then execute]] each machine instruction.{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Third Edition | publisher = Prentice Hall | year = 1990 | page = [https://archive.org/details/structuredcomput00tane/page/32 32] | isbn = 978-0-13-854662-5 | url = https://archive.org/details/structuredcomput00tane/page/32 }}
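The fetch, decode, and execute cycle can be sketched in a few lines of Python. This is a minimal illustration only; the three-instruction machine language and the sample program below are invented for the example:

```python
# Minimal sketch of a fetch-decode-execute loop.
# The three-instruction "machine language" here is invented for illustration.
memory = [
    ("LOAD", 7),   # put the operand in the accumulator
    ("ADD", 5),    # add the operand to the accumulator
    ("HALT", 0),   # stop the processor
]

accumulator = 0
program_counter = 0

while True:
    opcode, operand = memory[program_counter]   # fetch
    program_counter += 1
    if opcode == "LOAD":                        # decode, then execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        break

print(accumulator)  # 12
```

A real processor performs the same loop in hardware, one machine instruction per iteration.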
If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each [[Statement (computer science)|statement]]. Running the source code is slower than running an executable.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 7 | isbn = 0-201-71012-9 }}{{efn|An executable has each [[machine instruction]] ready for the [[CPU]].}} Moreover, the interpreter must be installed on the computer.
==Example computer program==
The [["Hello, World!" program]] is used to illustrate a language's basic syntax. The syntax of the language [[Dartmouth BASIC|BASIC]] (1964) was intentionally limited to make the language easy to learn.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 30 | isbn = 0-201-71012-9 | quote = Their intention was to produce a language that was very simple for students to learn[.] }} For example, [[Variable (computer science)|variables]] are not [[Declaration (computer programming)|declared]] before being used.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 31 | isbn = 0-201-71012-9 }} Also, variables are automatically initialized to zero. Here is an example computer program, in BASIC, to [[average]] a list of numbers:{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 30 | isbn = 0-201-71012-9 }}
 10 INPUT "How many numbers to average?", A
 20 FOR I = 1 TO A
 30 INPUT "Enter number:", B
 40 LET C = C + B
 50 NEXT I
 60 LET D = C/A
 70 PRINT "The average is", D
 80 END
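For comparison, the same averaging algorithm can be written in Python, reading its numbers from a list rather than prompting the user; the sample inputs are arbitrary. Note that the running total must be initialized explicitly, unlike in BASIC:

```python
# The averaging algorithm from the BASIC example, with the inputs
# supplied as a list instead of keyboard prompts.
numbers = [4, 8, 15]   # arbitrary sample inputs

total = 0              # must be initialized explicitly in Python
for b in numbers:
    total = total + b
average = total / len(numbers)

print("The average is", average)  # The average is 9.0
```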
Once the mechanics of basic computer programming are learned, more sophisticated and powerful languages are available to build large computer systems.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 30 | isbn = 0-201-71012-9 | quote = The idea was that students could be merely casual users or go on from Basic to more sophisticated and powerful languages[.] }}
==History== {{See also|Computer programming#History|Programmer#History|History of computing|History of programming languages|History of software}}
Improvements in [[software development]] have largely followed improvements in [[computer hardware]]. At each stage in hardware's history, the task of [[computer programming]] changed dramatically.
===Analytical Engine=== [[File:Diagram for the computation of Bernoulli numbers.jpg|thumb|right|Lovelace's description from Note G]] In 1837, [[Jacquard machine|Jacquard's loom]] inspired [[Charles Babbage]] to attempt to build the [[Analytical Engine]].{{cite book | last = McCartney | first = Scott | title = ENIAC – The Triumphs and Tragedies of the World's First Computer | publisher = Walker and Company | year = 1999 | page = [https://archive.org/details/eniac00scot/page/16 16] | isbn = 978-0-8027-1348-3 | url = https://archive.org/details/eniac00scot/page/16 }} The names of the components of the calculating device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled. The device had a ''store'' which consisted of memory to hold 1,000 numbers of 50 decimal digits each.{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Third Edition | publisher = Prentice Hall | year = 1990 | page = [https://archive.org/details/structuredcomput00tane/page/14 14] | isbn = 978-0-13-854662-5 | url = https://archive.org/details/structuredcomput00tane/page/14 }} Numbers from the ''store'' were transferred to the ''mill'' for processing. The engine was programmed using two sets of perforated cards. One set directed the operation and the other set inputted the variables.{{cite journal | first = Allan G. | last = Bromley | author-link = Allan G. 
Bromley | year = 1998 | url = https://profs.scienze.univr.it/~manca/storia-informatica/babbage.pdf | title = Charles Babbage's Analytical Engine, 1838 | journal = [[IEEE Annals of the History of Computing]] | volume = 20 | number = 4 | pages = 29–45 | doi = 10.1109/85.728228 | bibcode = 1998IAHC...20d..29B | s2cid = 2285332 | access-date = 2015-10-30 | archive-date = 2016-03-04 | archive-url = https://web.archive.org/web/20160304081812/http://profs.scienze.univr.it/~manca/storia-informatica/babbage.pdf | url-status = live }} However, the thousands of cogged wheels and gears never fully worked together.{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Third Edition | publisher = Prentice Hall | year = 1990 | page = [https://archive.org/details/structuredcomput00tane/page/15 15] | isbn = 978-0-13-854662-5 | url = https://archive.org/details/structuredcomput00tane/page/15 }}
[[Ada Lovelace]] worked for Charles Babbage to create a description of the Analytical Engine (1843).{{citation |author1 = J. Fuegi |author2 =J. Francis |title = Lovelace & Babbage and the creation of the 1843 'notes' |journal = Annals of the History of Computing |volume = 25 |issue = 4 |date=October–December 2003 |doi = 10.1109/MAHC.2003.1253887 |pages = 16, 19, 25 |bibcode =2003IAHC...25d..16F }} The description contained Note G which completely detailed a method for calculating [[Bernoulli number]]s using the Analytical Engine. This note is recognized by some historians as the world's first ''computer program''.
===Universal Turing machine=== [[File:Universal Turing machine.svg|350px|right]] In 1936, [[Alan Turing]] introduced the [[Universal Turing machine]], a theoretical device that can model every computation.{{cite book | last = Rosen | first = Kenneth H. | title = Discrete Mathematics and Its Applications | publisher = McGraw-Hill, Inc. | year = 1991 | page = [https://archive.org/details/discretemathemat00rose/page/654 654] | isbn = 978-0-07-053744-6 | url = https://archive.org/details/discretemathemat00rose/page/654 | quote = Turing machines can model all the computations that can be performed on a computing machine. }} It is a [[finite-state machine]] that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an [[algorithm]]. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state.{{cite book | last = Linz | first = Peter | title = An Introduction to Formal Languages and Automata | publisher = D. C. Heath and Company | year = 1990 | page = 234 | isbn = 978-0-669-17342-0 }} All present-day computers are [[Turing complete]].{{cite book | last = Linz | first = Peter | title = An Introduction to Formal Languages and Automata | publisher = D. C. Heath and Company | year = 1990 | page = 243 | isbn = 978-0-669-17342-0 | quote = [A]ll the common mathematical functions, no matter how complicated, are Turing-computable. }}
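A Turing machine is straightforward to simulate in software. The sketch below is illustrative only: the transition table maps (state, symbol) pairs to (new state, written symbol, head movement), and the sample machine simply inverts a binary string before halting:

```python
# Minimal Turing machine simulator (an illustrative sketch).
# The transition table maps (state, symbol) -> (new state, write, move).
# This sample machine inverts a binary string, then halts on a blank.
table = {
    ("invert", "0"): ("invert", "1", +1),
    ("invert", "1"): ("invert", "0", +1),
    ("invert", " "): ("halt",   " ",  0),   # blank cell: enter halt state
}

def run(tape_input):
    tape = dict(enumerate(tape_input))      # sparse dict models an infinite tape
    state, head = "invert", 0
    while state != "halt":
        symbol = tape.get(head, " ")        # unwritten cells read as blank
        state, written, move = table[(state, symbol)]
        tape[head] = written
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip()

print(run("1011"))  # 0100
```

The simulator itself is a finite program; only the tape is unbounded, which is the essential idea of the model.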
===ENIAC=== [[File:ENIAC-changing_a_tube.jpg|thumb|right|Glenn A. Beck changing a tube in ENIAC]] The [[Electronic Numerical Integrator And Computer]] (ENIAC) was built between July 1943 and Fall 1945. It was a Turing complete, general-purpose computer that used 17,468 [[vacuum tube]]s to create the [[Electronic circuit|circuits]]. At its core, it was a series of [[Pascaline]]s wired together.{{cite book | last = McCartney | first = Scott | title = ENIAC – The Triumphs and Tragedies of the World's First Computer | publisher = Walker and Company | year = 1999 | page = [https://archive.org/details/eniac00scot/page/102 102] | isbn = 978-0-8027-1348-3 | url = https://archive.org/details/eniac00scot/page/102 }} Its 40 units weighed 30 tons, occupied {{convert|1,800|sqft|m2|0}}, and consumed $650 per hour ([[Inflation|in 1940s currency]]) in electricity when idle. It had 20 [[base-10]] [[Accumulator (computing)|accumulators]]. Programming the ENIAC took up to two months. Three function tables were on wheels and needed to be rolled to fixed function panels. Function tables were connected to function panels by plugging heavy black cables into [[plugboard]]s. Each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. 
Debugging a program took a week.{{cite book | last = McCartney | first = Scott | title = ENIAC – The Triumphs and Tragedies of the World's First Computer | publisher = Walker and Company | year = 1999 | page = [https://archive.org/details/eniac00scot/page/94 94] | isbn = 978-0-8027-1348-3 | url = https://archive.org/details/eniac00scot/page/94 }} It ran from 1947 until 1955 at [[Aberdeen Proving Ground]], calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns.{{cite book | last = McCartney | first = Scott | title = ENIAC – The Triumphs and Tragedies of the World's First Computer | publisher = Walker and Company | year = 1999 | page = [https://archive.org/details/eniac00scot/page/107 107] | isbn = 978-0-8027-1348-3 | url = https://archive.org/details/eniac00scot/page/107 }}
===Stored-program computers=== Instead of plugging in cords and turning switches, a [[stored-program computer]] loads its instructions into [[Random-access memory|memory]] just like it loads its data into memory.{{cite book | last = McCartney | first = Scott | title = ENIAC – The Triumphs and Tragedies of the World's First Computer | publisher = Walker and Company | year = 1999 | page = [https://archive.org/details/eniac00scot/page/120 120] | isbn = 978-0-8027-1348-3 | url = https://archive.org/details/eniac00scot/page/120 }} As a result, the computer could be programmed quickly and perform calculations at very fast speeds.{{cite book | last = McCartney | first = Scott | title = ENIAC – The Triumphs and Tragedies of the World's First Computer | publisher = Walker and Company | year = 1999 | page = [https://archive.org/details/eniac00scot/page/118 118] | isbn = 978-0-8027-1348-3 | url = https://archive.org/details/eniac00scot/page/118 }} [[Presper Eckert]] and [[John Mauchly]] built the ENIAC. The two engineers introduced the ''stored-program concept'' in a three-page memo dated February 1944.{{cite book | last = McCartney | first = Scott | title = ENIAC – The Triumphs and Tragedies of the World's First Computer | publisher = Walker and Company | year = 1999 | page = [https://archive.org/details/eniac00scot/page/119 119] | isbn = 978-0-8027-1348-3 | url = https://archive.org/details/eniac00scot/page/119 }} Later, in September 1944, [[John von Neumann]] began working on the ENIAC project. On June 30, 1945, von Neumann published the ''[[First Draft of a Report on the EDVAC]]'', which equated the structures of the computer with the structures of the human brain. The design became known as the [[von Neumann architecture]]. 
The architecture was simultaneously deployed in the constructions of the [[EDVAC]] and [[EDSAC]] computers in 1949.{{cite book | last = McCartney | first = Scott | title = ENIAC – The Triumphs and Tragedies of the World's First Computer | publisher = Walker and Company | year = 1999 | page = [https://archive.org/details/eniac00scot/page/123 123] | isbn = 978-0-8027-1348-3 | url = https://archive.org/details/eniac00scot/page/123 }}{{Citation |last=Huskey |first=Harry D. |title=EDVAC |date=2003-01-01 |encyclopedia=Encyclopedia of Computer Science |pages=626–628 |url=https://dl.acm.org/doi/10.5555/1074100.1074362 |access-date=2025-04-25 |place=GBR |publisher=John Wiley and Sons Ltd. |isbn=978-0-470-86412-8}}
The [[IBM System/360]] (1964) was a family of computers, each having the same [[instruction set architecture]]. The [[IBM System/360 Model 20|Model 20]] was the smallest and least expensive. Customers could upgrade and retain the same [[application software]].{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Third Edition | publisher = Prentice Hall | year = 1990 | page = [https://archive.org/details/structuredcomput00tane/page/n42 21] | isbn = 978-0-13-854662-5 | url = https://archive.org/details/structuredcomput00tane | url-access = registration }} The [[IBM System/360 Model 195|Model 195]] was the most powerful and most expensive. Each System/360 model featured [[multiprogramming]]—having multiple [[Process (computing)|processes]] in [[random-access memory|memory]] at once. When one process was waiting for [[input/output]], another could compute.
IBM planned for each model to be programmed using [[PL/I]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 27 | isbn = 0-201-71012-9 }} A committee was formed that included [[COBOL]], [[FORTRAN]] and [[ALGOL]] programmers. The purpose was to develop a language that was comprehensive, easy to use, extendible, and would replace COBOL and FORTRAN. The result was a large and complex language that took a long time to [[compile]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 29 | isbn = 0-201-71012-9 }}
[[File:Dg-nova3.jpg|thumb|Switches for manual input on a [[Data General Nova]] 3, manufactured in the mid-1970s]] Computers manufactured until the 1970s had front-panel switches for manual programming.{{cite book | last = Silberschatz | first = Abraham | title = Operating System Concepts, Fourth Edition | publisher = Addison-Wesley | year = 1994 | page = 6 | isbn = 978-0-201-50480-4 }} The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs could also be loaded automatically from [[paper tape]], [[punched cards]], or [[9-track tape|magnetic tape]]. After the medium was loaded, the starting address was set via switches, and the execute button was pressed.
===Very Large Scale Integration=== [[Image:Diopsis.jpg|thumb|right|A VLSI integrated-circuit [[die (integrated circuit)|die]] ]] A major milestone in software development was the invention of the [[Very Large Scale Integration]] (VLSI) circuit (1964).
[[Robert Noyce]], co-founder of [[Fairchild Semiconductor]] (1957) and [[Intel]] (1968), achieved a technological improvement to refine the [[Semiconductor device fabrication|production]] of [[field-effect transistor]]s (1963).{{cite book | url=https://books.google.com/books?id=UUbB3d2UnaAC&pg=PA46 | title=To the Digital Age: Research Labs, Start-up Companies, and the Rise of MOS | publisher=Johns Hopkins University Press | year=2002 | isbn=9780801886393 | access-date=February 3, 2022 | archive-date=February 2, 2023 | archive-url=https://web.archive.org/web/20230202181649/https://books.google.com/books?id=UUbB3d2UnaAC&pg=PA46 | url-status=live }} The goal is to alter the [[electrical resistivity and conductivity]] of a [[semiconductor junction]]. First, naturally occurring [[silicate minerals]] are converted into [[polysilicon]] rods using the [[Siemens process]].{{cite web | url=https://www.osti.gov/servlets/purl/1497235 | title=Manufacturing of Silicon Materials for Microelectronics and Solar PV | publisher=Sandia National Laboratories | year=2017 | access-date=February 8, 2022 | last1=Chalamala | first1=Babu | archive-date=March 23, 2023 | archive-url=https://web.archive.org/web/20230323163602/https://www.osti.gov/biblio/1497235 | url-status=live }} The [[Czochralski process]] then converts the rods into a [[monocrystalline silicon]], [[Boule (crystal)|boule crystal]].{{cite web | url=https://www.britannica.com/technology/integrated-circuit/Fabricating-ICs#ref837156 | title=Fabricating ICs Making a base wafer | publisher=Britannica | access-date=February 8, 2022 | archive-date=February 8, 2022 | archive-url=https://web.archive.org/web/20220208103132/https://www.britannica.com/technology/integrated-circuit/Fabricating-ICs#ref837156 | url-status=live }} The [[crystal]] is then thinly sliced to form a [[Wafer (electronics)|wafer]] [[Substrate (materials science)|substrate]]. 
The [[planar process]] of [[photolithography]] then ''integrates'' unipolar transistors, [[capacitor]]s, [[diode]]s, and [[resistor]]s onto the wafer to build a matrix of [[metal–oxide–semiconductor]] (MOS) transistors.{{cite web | author1=Anysilicon | url=https://anysilicon.com/introduction-to-nmos-and-pmos-transistors/ | title=Introduction to NMOS and PMOS Transistors | work=AnySilicon | date=4 November 2021 | access-date=February 5, 2022 | archive-date=6 February 2022 | archive-url=https://web.archive.org/web/20220206051146/https://anysilicon.com/introduction-to-nmos-and-pmos-transistors/ | url-status=live }}{{cite web | url=https://www.britannica.com/technology/microprocessor#ref36149 | title=microprocessor definition | publisher=Britannica | access-date=April 1, 2022 | archive-date=April 1, 2022 | archive-url=https://web.archive.org/web/20220401085141/https://www.britannica.com/technology/microprocessor#ref36149 | url-status=live }} The MOS transistor is the primary component in ''integrated circuit chips''.
Originally, [[integrated circuit]] chips had their function set during manufacturing. During the 1960s, controlling the electrical flow migrated to programming a [[Diode matrix|matrix]] of [[read-only memory]] (ROM). The matrix resembled a two-dimensional array of fuses. The process to embed instructions onto the matrix was to burn out the unneeded connections. There were so many connections that [[firmware]] programmers wrote a ''computer program'' on another chip to oversee the burning. The technology became known as [[Programmable ROM]]. In 1971, Intel installed the computer program onto the chip and named it the [[Intel 4004]] [[microprocessor]].{{cite web | url=https://spectrum.ieee.org/chip-hall-of-fame-intel-4004-microprocessor | title=Chip Hall of Fame: Intel 4004 Microprocessor | publisher=Institute of Electrical and Electronics Engineers | date=July 2, 2018 | access-date=January 31, 2022 | archive-date=February 7, 2022 | archive-url=https://web.archive.org/web/20220207101915/https://spectrum.ieee.org/chip-hall-of-fame-intel-4004-microprocessor | url-status=live }}
[[Image:Slt1.jpg|thumb|right|IBM's System/360 (1964) CPU was not a microprocessor.]] The terms ''microprocessor'' and [[central processing unit]] (CPU) are now used interchangeably. However, CPUs predate microprocessors. For example, the [[IBM System/360]] (1964) had a CPU made from [[IBM Solid Logic Technology|circuit boards containing discrete components on ceramic substrates]].{{cite web | url=https://www.computer-museum.ru/books/archiv/ibm36040.pdf |archive-url=https://ghostarchive.org/archive/20221010/https://www.computer-museum.ru/books/archiv/ibm36040.pdf |archive-date=2022-10-10 |url-status=live | title=360 Revolution | publisher=Father, Son & Co. | year=1990 | access-date=February 5, 2022 }}
===x86 series=== [[File:IBM_PC-IMG_7271_(transparent).png|thumb|right|The original [[IBM Personal Computer]] (1981) used an Intel 8088 microprocessor.]] In 1978, the modern software development environment began when Intel upgraded the [[Intel 8080]] to the [[Intel 8086]]. Intel simplified the Intel 8086 to manufacture the cheaper [[Intel 8088]].{{cite web | url=https://books.google.com/books?id=VDAEAAAAMBAJ&pg=PA22 | title=Bill Gates, Microsoft and the IBM Personal Computer | publisher=InfoWorld | date=August 23, 1982 | access-date=1 February 2022 | archive-date=18 February 2023 | archive-url=https://web.archive.org/web/20230218183644/https://books.google.com/books?id=VDAEAAAAMBAJ&pg=PA22 | url-status=live }} [[IBM]] embraced the Intel 8088 when they entered the [[personal computer]] market (1981). As [[consumer]] [[demand]] for personal computers increased, so did Intel's microprocessor development. The succession of development is known as the [[x86|x86 series]]. The [[x86 assembly language]] is a family of [[backward-compatible]] [[machine instruction]]s. Machine instructions created in earlier microprocessors were retained throughout microprocessor upgrades. This enabled consumers to purchase new computers without having to purchase new [[application software]]. The major categories of instructions are:{{efn|For more information, visit [[X86 assembly language#Instruction types]].}}
- Memory instructions to set and access numbers and [[String (computer science)|strings]] in [[random-access memory]].
- Integer [[arithmetic logic unit]] (ALU) instructions to perform the primary arithmetic operations on [[integers]].
- Floating point ALU instructions to perform the primary arithmetic operations on [[real number]]s.
- [[Call stack]] instructions to push and pop [[Word (computer architecture)|words]] needed to allocate memory and interface with [[Function (computer programming)|functions]].
- [[Single instruction, multiple data]] (SIMD) instructions{{efn|introduced in 1999}} to increase speed when multiple processors are available to perform the same [[algorithm]] on an [[Array data structure|array of data]].
===Changing programming environment=== [[File:DEC VT100 terminal transparent.png|thumb|right|The [[Digital Equipment Corporation|DEC]] [[VT100]] (1978) was a widely used [[computer terminal]].]] VLSI circuits enabled the [[programming environment]] to advance from a [[computer terminal]] (until the 1990s) to a [[graphical user interface]] (GUI) computer. Computer terminals limited programmers to a single [[Shell (computing)|shell]] running in a [[command-line interface|command-line environment]]. During the 1970s, full-screen source code editing became possible through a [[text-based user interface]]. Regardless of the technology available, the programmer's goal is the same: to express a solution in a [[programming language]].
==Programming paradigms and languages==
Programming language features exist to provide building blocks to be combined to express programming ideals.{{cite book | last = Stroustrup | first = Bjarne | title = The C++ Programming Language, Fourth Edition | publisher = Addison-Wesley | year = 2013 | page = 10 | isbn = 978-0-321-56384-2 }} Ideally, a programming language should:
- express ideas directly in the code.
- express independent ideas independently.
- express relationships among ideas directly in the code.
- combine ideas freely.
- combine ideas only where combinations make sense.
- express simple ideas simply.
The [[programming style]]s by which a programming language provides these building blocks may be categorized into [[programming paradigm]]s.{{cite book | last = Stroustrup | first = Bjarne | title = The C++ Programming Language, Fourth Edition | publisher = Addison-Wesley | year = 2013 | page = 11 | isbn = 978-0-321-56384-2 }} For example, different paradigms may differentiate:
- [[Procedural programming|procedural languages]], [[functional language]]s, and [[Logic programming|logical languages]].
- different levels of [[data abstraction]].
- different levels of [[class hierarchy]].
- different levels of input [[datatypes]], as in [[Container (abstract data type)|container types]] and [[generic programming]].
Each of these programming styles has contributed to the synthesis of different ''programming languages''.
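The contrast between paradigms can be shown in miniature. Python, a multi-paradigm language, allows the same computation to be written in either a procedural or a functional style; the task here, summing squares, is arbitrary:

```python
# The same computation expressed in two paradigms.

# Procedural style: explicit statements mutate a variable step by step.
def sum_squares_procedural(numbers):
    total = 0
    for n in numbers:
        total += n * n
    return total

# Functional style: the result is a composition of expressions,
# with no mutable state.
def sum_squares_functional(numbers):
    return sum(map(lambda n: n * n, numbers))

print(sum_squares_procedural([1, 2, 3]))  # 14
print(sum_squares_functional([1, 2, 3]))  # 14
```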
A ''programming language'' is a set of [[Reserved word|keywords]], [[Character (computing)|symbols]], [[Identifier (computer languages)|identifiers]], and rules by which programmers can communicate instructions to the computer.{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 159 | isbn = 0-619-06489-7 }} The rules governing how these elements may be combined are called the language's [[Syntax (programming languages)|syntax]].
- ''Keywords'' are reserved words to form [[Declaration (computer programming)|declarations]] and [[Statement (computer science)|statements]].
- ''Symbols'' are characters to form [[Operation (mathematics)|operations]], [[Assignment (computer science)|assignments]], [[control flow]], and [[delimiter]]s.
- ''Identifiers'' are words created by programmers to form [[Constant (computer programming)|constants]], [[Variable (computer science)|variable names]], [[Record (computer science)|structure names]], and [[Function (computer programming)|function names]].
- ''Syntax Rules'' are defined in the [[Backus–Naur form]].
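As an illustration, a small grammar for arithmetic expressions might be written in Backus–Naur form as follows; the rule names are chosen for the example:

```
 <expression> ::= <term> | <expression> "+" <term>
 <term>       ::= <factor> | <term> "*" <factor>
 <factor>     ::= <digit> | "(" <expression> ")"
 <digit>      ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"
```

Each rule defines a symbol on the left in terms of sequences and alternatives (separated by <code>|</code>) on the right.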
''Programming languages'' get their basis from [[formal language]]s.{{cite book | last = Linz | first = Peter | title = An Introduction to Formal Languages and Automata | publisher = D. C. Heath and Company | year = 1990 | page = 2 | isbn = 978-0-669-17342-0 }} The purpose of defining a solution in terms of its ''formal language'' is to generate an [[algorithm]] to solve the underlying problem. An ''algorithm'' is a sequence of simple instructions that solve a problem.{{cite book | last = Weiss | first = Mark Allen | title = Data Structures and Algorithm Analysis in C++ | publisher = Benjamin/Cummings Publishing Company, Inc. | year = 1994 | page = 29 | isbn = 0-8053-5443-3 }}
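For example, [[Euclidean algorithm|Euclid's algorithm]] for the greatest common divisor reduces a problem to a short sequence of simple instructions; here is a sketch in Python:

```python
# Euclid's algorithm: a classic example of an algorithm expressed
# as a short sequence of simple instructions.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b   # replace the pair with (b, a mod b)
    return a

print(gcd(48, 18))  # 6
```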
===Generations of programming language=== {{Main|Programming language generations}} [[File:W65C816S Machine Code Monitor.jpeg|thumb|[[Machine language]] monitor on a [[W65C816S]] [[microprocessor]] ]] The evolution of programming languages began when the [[EDSAC]] (1949) used the first [[Stored-program computer|stored computer program]] in its [[von Neumann architecture]].{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Third Edition | publisher = Prentice Hall | year = 1990 | page = [https://archive.org/details/structuredcomput00tane/page/17 17] | isbn = 978-0-13-854662-5 | url = https://archive.org/details/structuredcomput00tane/page/17 }} Programming the EDSAC was in the first [[Programming language generations|generation of programming language]].{{Citation |last1=Wilkes |first1=M. V. |title=The EDSAC |date=1982 |work=The Origins of Digital Computers: Selected Papers |pages=417–421 |editor-last=Randell |editor-first=Brian |url=https://link.springer.com/chapter/10.1007/978-3-642-61812-3_34 |access-date=2025-04-25 |place=Berlin, Heidelberg |publisher=Springer |language=en |doi=10.1007/978-3-642-61812-3_34 |isbn=978-3-642-61812-3 |last2=Renwick |first2=W.|url-access=subscription }}
- The [[first-generation programming language|first generation of programming language]] is [[machine language]].{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 160 | isbn = 0-619-06489-7 }} ''Machine language'' requires the programmer to enter instructions using ''instruction numbers'' called [[machine code]]. For example, the ADD operation on the [[PDP-11 architecture|PDP-11]] has instruction number 24576.{{efn|Whereas this is a decimal number, PDP-11 code is always expressed as [[octal]].}}{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Third Edition | publisher = Prentice Hall | year = 1990 | page = [https://archive.org/details/structuredcomput00tane/page/399 399] | isbn = 978-0-13-854662-5 | url = https://archive.org/details/structuredcomput00tane/page/399 }}
- The [[second-generation programming language|second generation of programming language]] is [[assembly language]]. ''Assembly language'' allows the programmer to use [[Assembly language#Mnemonics|mnemonic]] [[Instruction_set_architecture#Instructions|instructions]] instead of remembering instruction numbers. An [[Assembler (computing)|assembler]] translates each assembly language mnemonic into its machine language number. For example, on the PDP-11, the operation 24576 can be referenced as ADD R0,R0 in the source code. The four basic arithmetic operations have assembly instructions like ADD, SUB, MUL, and DIV. Computers also have instructions like DW (Define [[Word (computer architecture)|Word]]) to reserve [[Random-access memory|memory]] cells. Then the MOV instruction can copy [[integer]]s between [[Processor register|registers]] and memory.
:* The basic structure of an assembly language statement is a label, [[Operation (mathematics)|operation]], [[operand]], and comment.{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Third Edition | publisher = Prentice Hall | year = 1990 | page = [https://archive.org/details/structuredcomput00tane/page/400 400] | isbn = 978-0-13-854662-5 | url = https://archive.org/details/structuredcomput00tane/page/400 }}
::* ''Labels'' allow the programmer to work with [[Variable (computer science)|variable names]]. The assembler will later translate labels into physical [[memory address]]es.
::* ''Operations'' allow the programmer to work with mnemonics. The assembler will later translate mnemonics into instruction numbers.
::* ''Operands'' tell the assembler which data the operation will process.
::* ''Comments'' allow the programmer to articulate a narrative because the instructions alone are vague.
:: The key characteristic of an assembly language program is that it forms a one-to-one mapping to its corresponding machine language target.{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Third Edition | publisher = Prentice Hall | year = 1990 | page = [https://archive.org/details/structuredcomput00tane/page/398 398] | isbn = 978-0-13-854662-5 | url = https://archive.org/details/structuredcomput00tane/page/398 }}
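The one-to-one mapping can be sketched with a toy assembler in Python. The mnemonics and instruction numbers below are invented for the example; real instruction numbers are machine-specific, such as the PDP-11's 24576 for ADD:

```python
# Toy assembler sketch: each mnemonic maps one-to-one to an invented
# instruction number (real opcodes are machine-specific).
OPCODES = {"MOV": 1, "ADD": 2, "SUB": 3, "HALT": 0}

def assemble(source):
    machine_code = []
    for line in source.splitlines():
        line = line.split(";")[0].strip()   # drop comments and blank lines
        if not line:
            continue
        mnemonic, _, operands = line.partition(" ")
        machine_code.append((OPCODES[mnemonic], operands.strip()))
    return machine_code

program = """
MOV R0, 5    ; statement: operation, operands, comment
ADD R0, R0
HALT
"""
print(assemble(program))  # [(1, 'R0, 5'), (2, 'R0, R0'), (0, '')]
```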
The [[third-generation programming language|third generation of programming language]] uses [[compiler]]s and [[Interpreter (computing)|interpreters]] to execute computer programs. The distinguishing feature of a ''third generation'' language is its independence from particular hardware.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 26 | isbn = 0-201-71012-9 }} Early languages include [[Fortran|FORTRAN]] (1958), [[COBOL]] (1959), [[ALGOL]] (1960), and [[BASIC]] (1964). In 1973, the [[C programming language]] emerged as a [[high-level language]] that produced efficient machine language instructions.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 37 | isbn = 0-201-71012-9 }} Whereas ''third-generation'' languages historically generated many machine instructions for each statement,{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 160 | isbn = 0-619-06489-7 | quote = With third-generation and higher-level programming languages, each statement in the language translates into several instructions in machine language. }} C has statements that may generate a single machine instruction.{{efn|[[Operators in C and C++|Operators]] like x++ will usually compile to a single instruction.}} Moreover, an [[optimizing compiler]] might overrule the programmer and produce fewer machine instructions than statements. Today, an entire [[programming paradigm|paradigm]] of languages fills the [[imperative programming|imperative]], ''third generation'' spectrum.
The [[fourth-generation programming language|fourth generation of programming language]] emphasizes what output results are desired, rather than how programming statements should be constructed. [[Declarative language]]s attempt to limit [[Side effect (computer science)|side effects]] and allow programmers to write code with relatively few errors. One popular ''fourth generation'' language is called [[Structured Query Language]] (SQL). [[Database]] developers no longer need to process each database record one at a time. Also, a simple [[Select (SQL)|select statement]] can generate output records without having to understand how they are retrieved.
===Imperative languages=== {{main|Imperative programming}}
[[File:Object-Oriented-Programming-Methods-And-Classes-with-Inheritance.png|thumb|A computer program written in an imperative language]] ''Imperative languages'' specify a sequential [[algorithm#Computer algorithm|algorithm]] using [[Declaration (computer programming)|declarations]], [[Expression (computer science)|expressions]], and [[Statement (computer science)|statements]]:{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Second Edition | publisher = Addison-Wesley | year = 1993 | page = 75 | isbn = 978-0-201-56885-1 }}
- A ''declaration'' introduces a [[variable (programming)|variable]] name to the ''computer program'' and assigns it a [[datatype]]{{cite book | last = Stroustrup | first = Bjarne | title = The C++ Programming Language, Fourth Edition | publisher = Addison-Wesley | year = 2013 | page = 40 | isbn = 978-0-321-56384-2 }} – for example: var x: integer;
- An ''expression'' yields a value – for example: 2 + 2 yields 4
- A ''statement'' might [[Assignment (computer science)|assign]] an expression to a variable or use the value of a variable to alter the program's [[control flow]] – for example: x := 2 + 2; [[Conditional_(computer_programming)#If–then(–else)|if]] x = 4 then do_something();
====Fortran==== [[FORTRAN]] (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system". It was designed for scientific calculations, without [[String (computer science)|string]] handling facilities. Along with [[Declaration (computer programming)|declarations]], [[Expression (computer science)|expressions]], and [[Statement (computer science)|statements]], it supported:
- [[Array data structure|arrays]].
- [[Function (computer programming)#Jump to subroutine|subroutines]].
- [[For loop#1957: FORTRAN|"do" loops]].
It succeeded because:
- programming and debugging costs were below computer running costs.
- it was supported by IBM.
- applications at the time were scientific.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 16 | isbn = 0-201-71012-9 }}
Non-IBM vendors also wrote Fortran compilers, but with a syntax that would likely fail IBM's compiler. The [[American National Standards Institute]] (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard until 1991. Fortran 90 supports:
- [[Record (computer science)|records]].
- [[Pointer (computer programming)|pointers]] to arrays.
====COBOL==== [[COBOL]] (1959) stands for "COmmon Business Oriented Language". Fortran manipulated symbols. It was soon realized that symbols did not need to be numbers, so [[String (computer science)|strings]] were introduced.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 24 | isbn = 0-201-71012-9 }} The [[US Department of Defense]] influenced COBOL's development, with [[Grace Hopper]] being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 25 | isbn = 0-201-71012-9 }}
COBOL's development was tightly controlled, so dialects requiring ANSI standardization did not emerge. As a consequence, the language was not changed for 15 years, until 1974. The 1990s version did make consequential changes, such as adding [[object-oriented programming]].
====Algol==== [[ALGOL]] (1960) stands for "ALGOrithmic Language". It had a profound influence on programming language design.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 19 | isbn = 0-201-71012-9 }} Emerging from a committee of European and American programming language experts, it used standard [[mathematical notation]] and had a readable, structured design. Algol was the first language to define its syntax using the [[Backus–Naur form]]. This led to [[Syntax-directed translation|syntax-directed]] compilers. It added features like:
- [[Block (programming)|block structure]], where variables were local to their block.
- arrays with variable bounds.
- [[For loop|"for" loops]].
- [[Function (computer programming)|functions]].
- [[Recursion (computer science)|recursion]].
Algol's direct descendants include [[Pascal (programming language)|Pascal]], [[Modula-2]], [[Ada (programming language)|Ada]], [[Delphi (software)|Delphi]] and [[Oberon (programming language)|Oberon]] on one branch. On another branch the descendants include [[C (programming language)|C]], [[C++]] and [[Java (programming language)|Java]].
====Basic==== [[BASIC]] (1964) stands for "Beginner's All-Purpose Symbolic Instruction Code". It was developed at [[Dartmouth College]] for all of its students to learn. If a student did not go on to a more powerful language, the student would still remember Basic. A Basic interpreter was installed in the [[microcomputers]] manufactured in the late 1970s. As the microcomputer industry grew, so did the language.
Basic pioneered the [[Read–eval–print loop|interactive session]]. It offered [[operating system]] commands within its environment:
- The 'new' command created an empty slate.
- Statements were evaluated immediately.
- Statements could be programmed by preceding them with line numbers.{{efn|The line numbers were typically incremented by 10 to leave room if additional statements were added later.}}
- The 'list' command displayed the program.
- The 'run' command executed the program.
However, the Basic syntax was too simple for large programs. Recent dialects added structure and object-oriented extensions. [[Microsoft]]'s [[Visual Basic]] is still widely used and produces a [[graphical user interface]].
====C==== [[C programming language]] (1973) got its name because the language [[BCPL]] was replaced with [[B (programming language)|B]], and [[AT&T Bell Labs]] called the next version "C". Its purpose was to write the [[UNIX]] [[operating system]]. C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s. It also grew because it has the facilities of [[assembly language]] but the syntax of a [[High-level programming language|high-level language]]. It added advanced features like:
- [[inline assembler]]
- arithmetic on pointers
- pointers to functions
- bit operations
- freely combining complex [[Operators in C and C++|operators]]
[[File:Computer-memory-map.png|thumb|right|Computer memory map]] ''C'' allows the programmer to control in which region of memory data is stored. [[Global variable]]s and [[static variable]]s require the fewest [[clock cycle]]s to store. The [[call stack|stack]] is automatically used for the standard variable [[Declaration (computer programming)|declarations]]. [[Manual memory management|Heap]] memory is returned to a [[pointer variable]] from the [[C dynamic memory allocation|malloc()]] function.
- The ''global and static data'' region is located just above the ''program'' region. (The program region is technically called the ''text'' region. It is where machine instructions are stored.)
:* The global and static data region is technically two regions.{{cite web | url = https://www.geeksforgeeks.org/memory-layout-of-c-program/ | title = Memory Layout of C Programs | date = 12 September 2011 | access-date = 6 November 2021 | archive-date = 6 November 2021 | archive-url = https://web.archive.org/web/20211106175644/https://www.geeksforgeeks.org/memory-layout-of-c-program/ | url-status = live }} One region is called the ''initialized [[data segment]]'', where variables declared with default values are stored. The other region is called the ''[[.bss|block started by segment]]'', where variables declared without default values are stored.
:* Variables stored in the ''global and static data'' region have their [[Memory address|addresses]] set at compile time. They retain their values throughout the life of the process.
:* The global and static region stores the ''global variables'' that are declared on top of (outside) the main() function.{{cite book |title=The C Programming Language Second Edition |last1=Kernighan |first1=Brian W. |last2=Ritchie |first2=Dennis M. |publisher=Prentice Hall |year=1988 |isbn=0-13-110362-8 |page=31}} Global variables are visible to main() and every other function in the source code.
: On the other hand, variable declarations inside of main(), other functions, or within { } [[Block (programming)|block delimiters]] are ''local variables''. Local variables also include ''[[formal parameter]] variables''. Parameter variables are enclosed within the parenthesis of a function definition.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 128 | isbn = 0-201-71012-9 }} Parameters provide an [[Interface (computing)|interface]] to the function.
:* ''Local variables'' declared using the static prefix are also stored in the ''global and static data'' region. Unlike global variables, static variables are only visible within the function or block. Static variables always retain their value. An example usage would be the function int increment_counter(){static int counter = 0; counter++; return counter;}{{efn|This function could be written more concisely as int increment_counter(){ static int counter; return ++counter;}. 1) Static variables are automatically initialized to zero. 2) ++counter is a prefix [[increment operator]].}}
- The [[call stack|stack]] region is a contiguous block of memory located near the top memory address.{{cite book |title=The Linux Programming Interface |last=Kerrisk |first=Michael |publisher=No Starch Press |year=2010 |isbn=978-1-59327-220-3 |page=121}} Variables placed in the stack are populated from top to bottom.{{efn|This is despite the metaphor of a ''stack,'' which normally grows from bottom to top.}} A [[Call stack#STACK-POINTER|stack pointer]] is a special-purpose [[processor register|register]] that keeps track of the last memory address populated. Variables are placed into the stack via the ''assembly language'' PUSH instruction. Therefore, the addresses of these variables are set during [[Runtime (program lifecycle phase)|runtime]]. The method for stack variables to lose their [[Scope (computer science)|scope]] is via the POP instruction.
:* ''Local variables'' declared without the static prefix, including formal parameter variables,{{cite book |title=The Linux Programming Interface |last=Kerrisk |first=Michael |publisher=No Starch Press |year=2010 |isbn=978-1-59327-220-3 |page=122}} are called ''automatic variables'' and are stored in the stack. They are visible inside the function or block and lose their scope upon exiting the function or block.
- The [[Manual memory management|heap]] region is located below the stack. It is populated from the bottom to the top. The [[operating system]] manages the heap using a ''heap pointer'' and a list of allocated memory blocks.{{cite book |title=The C Programming Language Second Edition |last1=Kernighan |first1=Brian W. |last2=Ritchie |first2=Dennis M. |publisher=Prentice Hall |year=1988 |isbn=0-13-110362-8 |page=185}} Like the stack, the addresses of heap variables are set during runtime. An [[out of memory]] error occurs when the heap pointer and the stack pointer meet.
:* ''C'' provides the malloc() library function to [[C dynamic memory allocation|allocate]] heap memory.{{efn|''C'' also provides the calloc() function to allocate heap memory. It provides two additional services: 1) It allows the programmer to create an [[Array (data structure)|array]] of arbitrary size. 2) It sets each [[Memory cell (computing)|memory cell]] to zero.}}{{cite book |title=The C Programming Language Second Edition |last1=Kernighan |first1=Brian W. |last2=Ritchie |first2=Dennis M. |publisher=Prentice Hall |year=1988 |isbn=0-13-110362-8 |page=187}} Populating the heap with data is an additional copy function.{{efn|For [[String (computer science)|string]] variables, ''C'' provides the strdup() function. It executes both the allocation function and the copy function.}} Variables stored in the heap are economically passed to functions using pointers. Without pointers, the entire block of data would have to be passed to the function via the stack.
====C++==== In the 1970s, [[software engineers]] needed language support to break large projects down into [[Modular programming|modules]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 38 | isbn = 0-201-71012-9 }} One obvious feature was to decompose large projects ''physically'' into separate [[computer file|files]]. A less obvious feature was to decompose large projects ''logically'' into [[abstract data type]]s. At the time, languages supported [[Type system|concrete (scalar)]] datatypes like [[integer]] numbers, [[floating-point]] numbers, and [[String (computer science)|strings]] of [[Character (computing)|characters]]. Abstract datatypes are [[Record (computer science)|structures]] of concrete datatypes, with a new name assigned. For example, a [[List (abstract data type)|list]] of integers could be called integer_list.
In object-oriented jargon, abstract datatypes are called [[Class (programming)|classes]]. However, a ''class'' is only a definition; no memory is allocated. When memory is allocated to a class and [[Name binding|bound]] to an [[identifier]], it is called an [[Object (computer science)|object]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 193 | isbn = 0-201-71012-9 }}
[[Object-oriented programming|Object-oriented imperative languages]] developed by combining the need for classes and the need for safe [[functional programming]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 39 | isbn = 0-201-71012-9 }} A [[Function (computer programming)|function]], in an object-oriented language, is assigned to a class. An assigned function is then referred to as a [[Method (computer programming)|method]], [[member function]], or [[Operation (mathematics)|operation]]. ''Object-oriented programming'' is executing ''operations'' on ''objects''.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 35 | isbn = 0-201-71012-9 }}
''Object-oriented languages'' support a syntax to model [[subset|subset/superset]] relationships. In [[set theory]], an [[Element (mathematics)|element]] of a subset inherits all the attributes contained in the superset. For example, a student is a person. Therefore, the set of students is a subset of the set of persons. As a result, students inherit all the attributes common to all persons. Additionally, students have unique attributes that other people do not have. ''Object-oriented languages'' model ''subset/superset'' relationships using [[Inheritance (object-oriented programming)|inheritance]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 192 | isbn = 0-201-71012-9 }} ''Object-oriented programming'' became the dominant language paradigm by the late 1990s.
[[C++]] (1985) was originally called "C with Classes".{{cite book | last = Stroustrup | first = Bjarne | title = The C++ Programming Language, Fourth Edition | publisher = Addison-Wesley | year = 2013 | page = 22 | isbn = 978-0-321-56384-2 }} It was designed to expand [[C (programming language)|C's]] capabilities by adding the object-oriented facilities of the language [[Simula]].{{cite book | last = Stroustrup | first = Bjarne | title = The C++ Programming Language, Fourth Edition | publisher = Addison-Wesley | year = 2013 | page = 21 | isbn = 978-0-321-56384-2 }}
An object-oriented module is composed of two files. The definitions file is called the [[header file]]. Here is a C++ ''header file'' for the ''GRADE class'' in a simple school application:
<syntaxhighlight lang="cpp">
// Used to allow multiple source files to include
// this header file without duplication errors.
// ----------------------------------------------
#ifndef GRADE_H
#define GRADE_H

class GRADE
{
    public:
        // This is the constructor operation.
        // ----------------------------------
        GRADE ( const char letter );

        // This is a class variable.
        // -------------------------
        char letter;

        // This is a member operation.
        // ---------------------------
        int grade_numeric( const char letter );

        // This is a class variable.
        // -------------------------
        int numeric;
};
#endif
</syntaxhighlight>
A [[Constructor (object-oriented programming)|constructor]] operation is a function with the same name as the class name.{{cite book | last = Stroustrup | first = Bjarne | title = The C++ Programming Language, Fourth Edition | publisher = Addison-Wesley | year = 2013 | page = 49 | isbn = 978-0-321-56384-2 }} It is executed when the calling operation executes the [[new and delete (C++)|new]] statement.
A module's other file is the [[source file]]. Here is a C++ source file for the ''GRADE class'' in a simple school application:
<syntaxhighlight lang="cpp">
#include "grade.h"

GRADE::GRADE( const char letter )
{
    // Reference the object using the keyword 'this'.
    // ----------------------------------------------
    this->letter = letter;

    // This is Temporal Cohesion
    // -------------------------
    this->numeric = grade_numeric( letter );
}

int GRADE::grade_numeric( const char letter )
{
    if ( ( letter == 'A' || letter == 'a' ) )
        return 4;
    else if ( ( letter == 'B' || letter == 'b' ) )
        return 3;
    else if ( ( letter == 'C' || letter == 'c' ) )
        return 2;
    else if ( ( letter == 'D' || letter == 'd' ) )
        return 1;
    else if ( ( letter == 'F' || letter == 'f' ) )
        return 0;
    else
        return -1;
}
</syntaxhighlight>
Here is a C++ ''header file'' for the ''PERSON class'' in a simple school application:
<syntaxhighlight lang="cpp">
#ifndef PERSON_H
#define PERSON_H

class PERSON
{
    public:
        PERSON ( const char *name );
        const char *name;
};
#endif
</syntaxhighlight>
Here is a C++ ''source file'' for the ''PERSON class'' in a simple school application:
<syntaxhighlight lang="cpp">
#include "person.h"

PERSON::PERSON ( const char *name )
{
    this->name = name;
}
</syntaxhighlight>
Here is a C++ ''header file'' for the ''STUDENT class'' in a simple school application:
<syntaxhighlight lang="cpp">
#ifndef STUDENT_H
#define STUDENT_H

#include "person.h"
#include "grade.h"

// A STUDENT is a subset of PERSON.
// --------------------------------
class STUDENT : public PERSON
{
    public:
        STUDENT ( const char *name );
        GRADE *grade;
};
#endif
</syntaxhighlight>
Here is a C++ ''source file'' for the ''STUDENT class'' in a simple school application:
<syntaxhighlight lang="cpp">
#include "student.h"

STUDENT::STUDENT ( const char *name ):
    // Execute the constructor of the PERSON superclass.
    // -------------------------------------------------
    PERSON( name )
{
    // Nothing else to do.
    // -------------------
}
</syntaxhighlight>
Here is a driver program for demonstration:
<syntaxhighlight lang="cpp">
#include <iostream>
#include "student.h"

int main( void )
{
    STUDENT *student = new STUDENT( "The Student" );
    student->grade = new GRADE( 'a' );

    std::cout
        // Notice student inherits PERSON's name
        << student->name
        << ": Numeric grade = "
        << student->grade->numeric
        << "\n";
    return 0;
}
</syntaxhighlight>
Here is a [[makefile]] to compile everything:
<syntaxhighlight lang="make">
# The first target, student_dvr, is the default.
student_dvr: student_dvr.cpp grade.o student.o person.o
	c++ student_dvr.cpp grade.o student.o person.o -o student_dvr

grade.o: grade.cpp grade.h
	c++ -c grade.cpp

student.o: student.cpp student.h
	c++ -c student.cpp

person.o: person.cpp person.h
	c++ -c person.cpp

clean:
	rm student_dvr *.o
</syntaxhighlight>
===Declarative languages=== {{main|Declarative programming}}
''Imperative languages'' have one major criticism: assigning an expression to a ''non-local'' variable may produce an unintended [[Side effect (computer science)|side effect]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 218 | isbn = 0-201-71012-9 }} [[Declarative language]]s generally omit the assignment statement and the control flow. They describe ''what'' computation should be performed and not ''how'' to compute it. Two broad categories of declarative languages are [[functional language]]s and [[Logic programming|logical languages]].
The principle behind a ''functional language'' is to use [[lambda calculus]] as a guide for a well-defined [[Semantics (computer science)|semantics]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 217 | isbn = 0-201-71012-9 }} In mathematics, a function is a rule that maps elements from an ''expression'' to a range of ''values''. Consider the function:
times_10(x) = 10 * x
The ''expression'' 10 * x is mapped by the function times_10() to a range of ''values''. One ''value'' happens to be 20. This occurs when x is 2. So, the application of the function is mathematically written as:
times_10(2) = 20
A ''functional language'' compiler will not store this value in a variable. Instead, it will ''push'' the value onto the computer's [[Call stack|stack]] before setting the [[program counter]] back to the calling function. The calling function will then ''pop'' the value from the stack.{{cite book | last = Weiss | first = Mark Allen | title = Data Structures and Algorithm Analysis in C++ | publisher = Benjamin/Cummings Publishing Company, Inc. | year = 1994 | page = 103 | isbn = 0-8053-5443-3 | quote = When there is a function call, all the important information needs to be saved, such as register values (corresponding to variable names) and the return address (which can be obtained from the program counter)[.] ... When the function wants to return, it ... restores all the registers. It then makes the return jump. Clearly, all of this work can be done using a stack, and that is exactly what happens in virtually every programming language that implements recursion. }}
''Imperative languages'' do support functions. Therefore, ''functional programming'' can be achieved in an imperative language, if the programmer uses discipline. However, a ''functional language'' will force this discipline onto the programmer through its syntax. Functional languages have a syntax tailored to emphasize the ''what''.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 230 | isbn = 0-201-71012-9 }}
A functional program is developed with a set of primitive functions followed by a single driver function. Consider the [[Snippet (programming)|snippet]]:
function max( a, b ){/* code omitted */}
function min( a, b ){/* code omitted */}
function range( a, b, c ) { return max( a, max( b, c ) ) - min( a, min( b, c ) ); }
The primitives are max() and min(). The driver function is range(). Executing:
put( range( 10, 4, 7) );

will output 6.
''Functional languages'' are used in [[computer science]] research to explore new language features.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 240 | isbn = 0-201-71012-9 }} Moreover, their lack of side effects has made them popular in [[parallel programming]] and [[concurrent programming]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 241 | isbn = 0-201-71012-9 }} However, application developers prefer the [[object-oriented programming|object-oriented features]] of ''imperative languages''.
====Lisp==== [[Lisp (programming language)|Lisp]] (1958) stands for "LISt Processor".{{cite book | last1=Jones | first1=Robin | last2=Maynard | first2=Clive | last3=Stewart | first3=Ian | title=The Art of Lisp Programming | date=December 6, 2012 | publisher=Springer Science & Business Media | isbn=9781447117193 | page=2}} It is tailored to process [[List (abstract data type)|lists]]. A full structure of the data is formed by building lists of lists. In memory, a [[tree data structure]] is built. Internally, the tree structure lends nicely for [[Recursion (computer science)|recursive]] functions.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 220 | isbn = 0-201-71012-9 }} The syntax to build a tree is to enclose the space-separated [[Element (mathematics)|elements]] within parenthesis. The following is a [[list]] of three elements. The first two elements are themselves lists of two elements:
((A B) (HELLO WORLD) 94)
Lisp has functions to extract and reconstruct elements.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 221 | isbn = 0-201-71012-9 }} The function head() returns a list containing the first element in the list. The function tail() returns a list containing everything but the first element. The function cons() returns a list that is the concatenation of other lists. Therefore, the following expression will return the list x:
cons(head(x), tail(x))
One drawback of Lisp is that when many functions are nested, the parentheses may look confusing. Modern Lisp [[Integrated development environment|environments]] help ensure parentheses match. As an aside, Lisp does support the ''imperative language'' operations of the assignment statement and goto loops.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 229 | isbn = 0-201-71012-9 }} Also, ''Lisp'' is not concerned with the [[datatype]] of the elements at compile time.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 227 | isbn = 0-201-71012-9 }} Instead, it assigns (and may reassign) the datatypes at [[Runtime (program lifecycle phase)|runtime]]. Assigning the datatype at runtime is called [[Name binding#Binding time|dynamic binding]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 222 | isbn = 0-201-71012-9 }} Whereas dynamic binding increases the language's flexibility, programming errors may linger until late in the [[software development process]].
Writing large, reliable, and readable Lisp programs requires forethought. If properly planned, the program may be much shorter than an equivalent ''imperative language'' program. ''Lisp'' is widely used in [[artificial intelligence]]. However, its usage has been accepted only because it has ''imperative language'' operations, making unintended side-effects possible.
====ML==== [[ML (programming language)|ML]] (1973){{cite web | last = Gordon | first = Michael J. C. | author-link = Michael J. C. Gordon | year = 1996 | title = From LCF to HOL: a short history | url = http://www.cl.cam.ac.uk/~mjcg/papers/HolHistory.html | access-date = 2021-10-30 | archive-date = 2016-09-05 | archive-url = https://web.archive.org/web/20160905201847/http://www.cl.cam.ac.uk/~mjcg/papers/HolHistory.html | url-status = live }} stands for "Meta Language". ML checks to make sure only data of the same type are compared with one another.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 233 | isbn = 0-201-71012-9 }} For example, this function has one input parameter (an integer) and returns an integer:
{{sxhl|2=sml|1=fun times_10(n : int) : int = 10 * n;}}
''ML'' is not parenthesis-eccentric like ''Lisp''. The following is an application of times_10():
times_10 2
It returns "20 : int". (Both the results and the datatype are returned.)
Like ''Lisp'', ''ML'' is tailored to process lists. Unlike ''Lisp'', each element is the same datatype.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 235 | isbn = 0-201-71012-9 }} Moreover, ''ML'' assigns the datatype of an element at [[compile time]]. Assigning the datatype at compile time is called [[Name binding#Binding time|static binding]]. Static binding increases reliability because the compiler checks the context of variables before they are used.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 55 | isbn = 0-201-71012-9 }}
====Prolog==== [[Prolog]] (1972) stands for "PROgramming in LOGic". It is a [[logic programming]] language, based on formal [[logic]]. The language was developed by [[Alain Colmerauer]] and Philippe Roussel in Marseille, France. It is an implementation of [[SLD resolution|Selective Linear Definite clause resolution]], pioneered by [[Robert Kowalski]] and others at the [[University of Edinburgh]].{{Cite journal | publisher = Association for Computing Machinery | doi = 10.1145/155360.155362 | first1 = A. | last1 = Colmerauer | first2 = P. | last2 = Roussel | title = The birth of Prolog | journal = ACM SIGPLAN Notices | volume = 28 | issue = 3 | page = 5 | year = 1992 | url=http://alain.colmerauer.free.fr/alcol/ArchivesPublications/PrologHistory/19november92.pdf}}
The building blocks of a Prolog program are ''facts'' and ''rules''. Here is a simple example:
<syntaxhighlight lang="prolog">
cat(tom).                         % tom is a cat
mouse(jerry).                     % jerry is a mouse
animal(X) :- cat(X).              % each cat is an animal
animal(X) :- mouse(X).            % each mouse is an animal
big(X)    :- cat(X).              % each cat is big
small(X)  :- mouse(X).            % each mouse is small
eat(X,Y)  :- mouse(X), cheese(Y). % each mouse eats each cheese
eat(X,Y)  :- big(X), small(Y).    % each big animal eats each small animal
</syntaxhighlight>
After all the facts and rules are entered, a question can be asked:

Will Tom eat Jerry?
<syntaxhighlight lang="prolog">
?- eat(tom,jerry).
true
</syntaxhighlight>
The following example shows how Prolog converts a letter grade to its numeric value:
<syntaxhighlight lang="prolog">
numeric_grade('A', 4).
numeric_grade('B', 3).
numeric_grade('C', 2).
numeric_grade('D', 1).
numeric_grade('F', 0).
numeric_grade(X, -1) :- not X = 'A', not X = 'B',
                        not X = 'C', not X = 'D', not X = 'F'.
grade('The Student', 'A').

?- grade('The Student', X), numeric_grade(X, Y).
X = 'A',
Y = 4
</syntaxhighlight>
Here is a comprehensive example:Kowalski, R., Dávila, J., Sartor, G. and Calejo, M., 2023. Logical English for law and education. In Prolog: The Next 50 Years (pp. 287–299). Cham: Springer Nature Switzerland.
1) All dragons billow fire, or equivalently, a thing billows fire if the thing is a dragon:
<syntaxhighlight lang="prolog">
billows_fire(X) :- is_a_dragon(X).
</syntaxhighlight>
2) A creature billows fire if one of its parents billows fire:
<syntaxhighlight lang="prolog">
billows_fire(X) :- is_a_creature(X), is_a_parent_of(Y,X), billows_fire(Y).
</syntaxhighlight>
3) A thing X is a parent of a thing Y if X is the mother of Y or X is the father of Y:
<syntaxhighlight lang="prolog">
is_a_parent_of(X, Y) :- is_the_mother_of(X, Y).
is_a_parent_of(X, Y) :- is_the_father_of(X, Y).
</syntaxhighlight>
4) A thing is a creature if the thing is a dragon:
<syntaxhighlight lang="prolog">
is_a_creature(X) :- is_a_dragon(X).
</syntaxhighlight>
5) Norberta is a dragon, and Puff is a creature. Norberta is the mother of Puff:
<syntaxhighlight lang="prolog">
is_a_dragon(norberta).
is_a_creature(puff).
is_the_mother_of(norberta, puff).
</syntaxhighlight>
Rule (2) is a [[Recursion (computer science)|recursive]] (inductive) definition. It can be understood declaratively, without the need to understand how it is executed.
Rule (3) shows how [[Function (computer programming)|functions]] are represented by using relations. Here, the mother and father functions ensure that every individual has only one mother and only one father.
Prolog is an untyped language. Nonetheless, [[Inheritance (object-oriented programming)|inheritance]] can be represented by using predicates. Rule (4) asserts that a creature is a superclass of a dragon.
Questions are answered using [[backward reasoning]]. Given the question:
<syntaxhighlight lang="prolog">
?- billows_fire(X).
</syntaxhighlight>
Prolog generates two answers:
<syntaxhighlight lang="prolog">
X = norberta
X = puff
</syntaxhighlight>
Practical applications for Prolog are [[knowledge representation]] and [[problem solving]] in [[artificial intelligence]].
===Object-oriented programming=== [[Object-oriented programming]] is a programming method to execute [[Method (computer programming)|operations]] ([[Function (computer programming)|functions]]) on [[Object (computer science)|objects]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 35 | isbn = 0-201-71012-9 | quote = Simula was based on Algol 60 with one very important addition — the class concept. ... The basic idea was that the data (or data structure) and the operations performed on it belong together[.] }} The basic idea is to group the characteristics of a [[phenomenon]] into an object [[Record (computer science)|container]] and give the container a name. The ''operations'' on the phenomenon are also grouped into the container. ''Object-oriented programming'' developed by combining the need for containers and the need for safe [[functional programming]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 39 | isbn = 0-201-71012-9 | quote = Originally, a large number of experimental languages were designed, many of which combined object-oriented and functional programming. }} This programming method need not be confined to an ''object-oriented language''.{{cite book | last = Schach | first = Stephen R. | title = Software Engineering | publisher = Aksen Associates Incorporated Publishers | year = 1990 | page = 284 | isbn = 0-256-08515-3 | quote = While it is true that OOD [(object oriented design)] as such is not supported by the majority of popular languages, a large subset of OOD can be used. }} In an object-oriented language, an object container is called a [[Class (programming)|class]]. In a non-object-oriented language, a [[data structure]] (which is also known as a [[Record (computer science)|record]]) may become an object container. 
To turn a data structure into an object container, operations need to be written specifically for the structure. The resulting structure is called an [[abstract datatype]].{{cite book | last = Weiss | first = Mark Allen | title = Data Structures and Algorithm Analysis in C++ | publisher = Benjamin/Cummings Publishing Company, Inc. | year = 1994 | page = 57 | isbn = 0-8053-5443-3 }} However, [[Inheritance (object-oriented programming)|inheritance]] will be missing. Nonetheless, this shortcoming can be overcome.
Here is a [[C programming language]] ''header file'' for the ''GRADE abstract datatype'' in a simple school application:
<syntaxhighlight lang="c">
/* Used to allow multiple source files to include */
/* this header file without duplication errors.   */
/* ---------------------------------------------- */
#ifndef GRADE_H
#define GRADE_H

typedef struct
{
    char letter;
} GRADE;

/* Constructor */
/* ----------- */
GRADE *grade_new( char letter );

int grade_numeric( char letter );

#endif
</syntaxhighlight>
The grade_new() function performs the same algorithm as the C++ [[Constructor (object-oriented programming)|constructor]] operation.
Here is a C programming language ''[[source file]]'' for the ''GRADE abstract datatype'' in a simple school application:
<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdlib.h>
#include "grade.h"

GRADE *grade_new( char letter )
{
    GRADE *grade;

    /* Allocate heap memory. */
    /* --------------------- */
    if ( ! ( grade = calloc( 1, sizeof ( GRADE ) ) ) )
    {
        fprintf( stderr,
                 "ERROR in %s/%s/%d: calloc() returned empty.\n",
                 __FILE__,
                 __FUNCTION__,
                 __LINE__ );
        exit( 1 );
    }

    grade->letter = letter;

    return grade;
}

int grade_numeric( char letter )
{
    if ( letter == 'A' || letter == 'a' )
        return 4;
    else if ( letter == 'B' || letter == 'b' )
        return 3;
    else if ( letter == 'C' || letter == 'c' )
        return 2;
    else if ( letter == 'D' || letter == 'd' )
        return 1;
    else if ( letter == 'F' || letter == 'f' )
        return 0;
    else
        return -1;
}
</syntaxhighlight>
In the constructor, the function calloc() is used instead of malloc() because each memory cell will be set to zero.
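The zero-fill guarantee can be checked directly. The sketch below (a hypothetical helper, not part of the school application) allocates a block with calloc() and reports whether every byte is zero:

```c
#include <stdlib.h>

/* Allocate n bytes with calloc() and report whether every */
/* byte is zero: returns 1 if fully zeroed, 0 otherwise.   */
int calloc_block_is_zeroed( size_t n )
{
    unsigned char *p = calloc( n, 1 );
    if ( p == NULL )
        return 0;
    for ( size_t i = 0; i < n; i++ )
        if ( p[i] != 0 )
        {
            free( p );
            return 0;
        }
    free( p );
    return 1;
}
```

Memory from malloc(), by contrast, is uninitialized; a caller would have to clear it explicitly, for example with memset().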
Here is a C programming language ''header file'' for the ''PERSON abstract datatype'' in a simple school application:
<syntaxhighlight lang="c">
#ifndef PERSON_H
#define PERSON_H

typedef struct
{
    char *name;
} PERSON;

/* Constructor */
/* ----------- */
PERSON *person_new( char *name );

#endif
</syntaxhighlight>
Here is a C programming language ''source file'' for the ''PERSON abstract datatype'' in a simple school application:
<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdlib.h>
#include "person.h"

PERSON *person_new( char *name )
{
    PERSON *person;

    if ( ! ( person = calloc( 1, sizeof ( PERSON ) ) ) )
    {
        fprintf( stderr,
                 "ERROR in %s/%s/%d: calloc() returned empty.\n",
                 __FILE__,
                 __FUNCTION__,
                 __LINE__ );
        exit( 1 );
    }

    person->name = name;

    return person;
}
</syntaxhighlight>
Here is a C programming language ''header file'' for the ''STUDENT abstract datatype'' in a simple school application:
<syntaxhighlight lang="c">
#ifndef STUDENT_H
#define STUDENT_H

#include "person.h"
#include "grade.h"

typedef struct
{
    /* A STUDENT is a subset of PERSON. */
    /* -------------------------------- */
    PERSON *person;

    GRADE *grade;
} STUDENT;

/* Constructor */
/* ----------- */
STUDENT *student_new( char *name );

#endif
</syntaxhighlight>
Here is a C programming language ''source file'' for the ''STUDENT abstract datatype'' in a simple school application:
<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdlib.h>
#include "student.h"

STUDENT *student_new( char *name )
{
    STUDENT *student;

    if ( ! ( student = calloc( 1, sizeof ( STUDENT ) ) ) )
    {
        fprintf( stderr,
                 "ERROR in %s/%s/%d: calloc() returned empty.\n",
                 __FILE__,
                 __FUNCTION__,
                 __LINE__ );
        exit( 1 );
    }

    /* Execute the constructor of the PERSON superclass. */
    /* ------------------------------------------------- */
    student->person = person_new( name );

    return student;
}
</syntaxhighlight>
Here is a driver program for demonstration:
<syntaxhighlight lang="c">
#include <stdio.h>
#include "student.h"

int main( void )
{
    STUDENT *student = student_new( "The Student" );
    student->grade = grade_new( 'a' );

    printf( "%s: Numeric grade = %d\n",
            /* Whereas a subset exists, inheritance does not. */
            student->person->name,
            /* Functional programming is executing functions just-in-time (JIT). */
            grade_numeric( student->grade->letter ) );

    return 0;
}
</syntaxhighlight>
Here is a [[makefile]] to compile everything:
<syntaxhighlight lang="make">
all: student_dvr

clean:
	rm student_dvr *.o

student_dvr: student_dvr.c grade.o student.o person.o
	gcc student_dvr.c grade.o student.o person.o -o student_dvr

grade.o: grade.c grade.h
	gcc -c grade.c

student.o: student.c student.h
	gcc -c student.c

person.o: person.c person.h
	gcc -c person.c
</syntaxhighlight>
The formal strategy to build object-oriented objects is to:{{cite book | last = Schach | first = Stephen R. | title = Software Engineering | publisher = Aksen Associates Incorporated Publishers | year = 1990 | page = 285 | isbn = 0-256-08515-3 }}
- Identify the objects. Most likely these will be nouns.
- Identify each object's attributes. What helps to describe the object?
- Identify each object's actions. Most likely these will be verbs.
- Identify the relationships from object to object. Most likely these will be verbs.
For example:
- A person is a human identified by a name.
- A grade is an achievement identified by a letter.
- A student is a person who earns a grade.
===Syntax and semantics=== [[File:Terminal and non-terminal symbols example.png|300px|thumb|right|Production rules consist of a set of terminals and non-terminals.]]
The [[Syntax (programming languages)|syntax]] of a ''computer program'' is a [[list]] of [[Production (computer science)|production rules]] which form its [[formal grammar|grammar]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 290 | quote = The syntax (or grammar) of a programming language describes the correct form in which programs may be written[.] | isbn = 0-201-71012-9 }} A programming language's grammar correctly places its [[Declaration (computer programming)|declarations]], [[Expression (computer science)|expressions]], and [[Statement (computer science)|statements]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 78 | isbn = 0-201-71012-9 | quote = The main components of an imperative language are declarations, expressions, and statements. }} Complementing the ''syntax'' of a language are its [[Semantics (computer science)|semantics]]. The ''semantics'' describe the meanings attached to various syntactic constructs.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 290 | isbn = 0-201-71012-9 }} A syntactic construct may need a semantic description because a production rule may have an invalid interpretation.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 294 | isbn = 0-201-71012-9 }} Also, different languages might have the same syntax; however, their behaviors may be different.
The syntax of a language is formally described by listing the production rules. Whereas the syntax of a [[natural language]] is extremely complicated, a subset of the English language can have this production rule listing:{{cite book | last = Rosen | first = Kenneth H. | title = Discrete Mathematics and Its Applications | publisher = McGraw-Hill, Inc. | year = 1991 | page = [https://archive.org/details/discretemathemat00rose/page/615 615] | isbn = 978-0-07-053744-6 | url = https://archive.org/details/discretemathemat00rose/page/615}}
a '''sentence''' is made up of a '''noun-phrase''' followed by a '''verb-phrase''';
a '''noun-phrase''' is made up of an '''article''' followed by an '''adjective''' followed by a '''noun''';
a '''verb-phrase''' is made up of a '''verb''' followed by a '''noun-phrase''';
an '''article''' is 'the';
an '''adjective''' is 'big' or
an '''adjective''' is 'small';
a '''noun''' is 'cat' or
a '''noun''' is 'mouse';
a '''verb''' is 'eats';
The words in '''bold-face''' are known as ''non-terminals''. The words in 'single quotes' are known as ''terminals''.{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 291 | isbn = 0-201-71012-9 }}
From this production rule listing, complete sentences may be formed using a series of replacements.{{cite book | last = Rosen | first = Kenneth H. | title = Discrete Mathematics and Its Applications | publisher = McGraw-Hill, Inc. | year = 1991 | page = [https://archive.org/details/discretemathemat00rose/page/616 616] | isbn = 978-0-07-053744-6 | url = https://archive.org/details/discretemathemat00rose/page/616}} The process is to replace ''non-terminals'' with either a valid ''non-terminal'' or a valid ''terminal''. The replacement process repeats until only ''terminals'' remain. One valid sentence is:
- '''sentence'''
- '''noun-phrase''' '''verb-phrase'''
- '''article''' '''adjective''' '''noun''' '''verb-phrase'''
- ''the'' '''adjective''' '''noun''' '''verb-phrase'''
- ''the'' ''big'' '''noun''' '''verb-phrase'''
- ''the'' ''big'' ''cat'' '''verb-phrase'''
- ''the'' ''big'' ''cat'' '''verb''' '''noun-phrase'''
- ''the'' ''big'' ''cat'' ''eats'' '''noun-phrase'''
- ''the'' ''big'' ''cat'' ''eats'' '''article''' '''adjective''' '''noun'''
- ''the'' ''big'' ''cat'' ''eats'' ''the'' '''adjective''' '''noun'''
- ''the'' ''big'' ''cat'' ''eats'' ''the'' ''small'' '''noun'''
- ''the'' ''big'' ''cat'' ''eats'' ''the'' ''small'' ''mouse''
However, another combination results in an invalid sentence:
- ''the'' ''small'' ''mouse'' ''eats'' ''the'' ''big'' ''cat''

Therefore, a ''semantic'' is necessary to correctly describe the meaning of an ''eat'' activity.
One ''production rule'' listing method is called the [[Backus–Naur form]] (BNF).{{cite book | last = Rosen | first = Kenneth H. | title = Discrete Mathematics and Its Applications | publisher = McGraw-Hill, Inc. | year = 1991 | page = [https://archive.org/details/discretemathemat00rose/page/623 623] | isbn = 978-0-07-053744-6 | url = https://archive.org/details/discretemathemat00rose/page/623}} BNF describes the syntax of a language and itself has a ''syntax''. This recursive definition is an example of a [[metalanguage]]. The ''syntax'' of BNF includes:
- ::= which translates to ''is made up of a[n]'' when a non-terminal is to its right. It translates to ''is'' when a terminal is to its right.
- | which translates to ''or''.
- < and > which surround '''non-terminals'''.
Using BNF, a subset of the English language can have this ''production rule'' listing:
<syntaxhighlight lang="bnf">
<sentence> ::= <noun-phrase><verb-phrase>
<noun-phrase> ::= <article><adjective><noun>
<verb-phrase> ::= <verb><noun-phrase>
</syntaxhighlight>
Using BNF, a signed-[[Integer (computer science)|integer]] has the ''production rule'' listing:{{cite book | last = Rosen | first = Kenneth H. | title = Discrete Mathematics and Its Applications | publisher = McGraw-Hill, Inc. | year = 1991 | page = [https://archive.org/details/discretemathemat00rose/page/624 624] | isbn = 978-0-07-053744-6 | url = https://archive.org/details/discretemathemat00rose/page/624}}
<syntaxhighlight lang="bnf">
<signed-integer> ::= <sign><integer>
<sign> ::= + | -
<integer> ::= <digit> | <digit><integer>
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
</syntaxhighlight>
Notice the recursive production rule:
<syntaxhighlight lang="bnf">
<integer> ::= <digit> | <digit><integer>
</syntaxhighlight>
This allows for an infinite number of possibilities. Therefore, a ''semantic'' is necessary to describe a limitation of the number of digits.

Notice the leading zero possibility in the production rules:
<syntaxhighlight lang="bnf">
<integer> ::= <digit> | <digit><integer>
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
</syntaxhighlight>
Therefore, a ''semantic'' is necessary to describe that leading zeros need to be ignored.
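The signed-integer grammar can also be checked mechanically. The following is a minimal recursive-descent recognizer in C (a sketch, not drawn from the cited texts); the recursion in the integer rule is unrolled into a loop:

```c
#include <ctype.h>

/* Recognize: <signed-integer> ::= <sign><integer>     */
/*            <sign>    ::= + | -                      */
/*            <integer> ::= <digit> | <digit><integer> */
/* Returns 1 if the whole string matches, 0 otherwise. */
static int is_digit_seq( const char *s )
{
    if ( !isdigit( (unsigned char) *s ) )   /* <integer> needs at least one <digit> */
        return 0;
    while ( isdigit( (unsigned char) *s ) ) /* the recursive rule, as a loop */
        s++;
    return *s == '\0';
}

int is_signed_integer( const char *s )
{
    if ( *s == '+' || *s == '-' )           /* <sign> is required by the grammar */
        s++;
    else
        return 0;
    return is_digit_seq( s );
}
```

Note that the recognizer accepts "+007"; the grammar alone cannot express that leading zeros should be ignored, which is exactly where a semantic description is needed.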
Two formal methods are available to describe ''semantics''. They are [[denotational semantics]] and [[axiomatic semantics]].{{cite book | last = Wilson | first = Leslie B. | title = Comparative Programming Languages, Third Edition | publisher = Addison-Wesley | year = 2001 | page = 297 | isbn = 0-201-71012-9 }}
==Software engineering and computer programming== [[File:Two women operating ENIAC (full resolution).jpg|thumb|right|Prior to programming languages, [[Jean Bartik|Betty Jennings]] and [[Fran Bilas]] programmed the [[ENIAC]] by moving cables and setting switches.]]
[[Software engineering]] is a variety of techniques to produce [[software quality|quality]] ''computer programs''.{{cite book | last = Schach | first = Stephen R. | title = Software Engineering | publisher = Aksen Associates Incorporated Publishers | year = 1990 | page = Preface | isbn = 0-256-08515-3 }} [[Computer programming]] is the process of writing or editing [[source code]]. In a formal environment, a [[systems analyst]] will gather information from managers about all the organization's processes to automate. This professional then prepares a [[Functional requirement|detailed plan]] for the new or modified system.{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 507 | isbn = 0-619-06489-7 }} The plan is analogous to an architect's blueprint.
===Performance objectives=== The systems analyst has the objective to deliver the right information to the right person at the right time.{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 513 | isbn = 0-619-06489-7 }} The critical factors to achieve this objective are:
- The quality of the output. Is the output useful for decision-making?
- The accuracy of the output. Does it reflect the true situation?
- The format of the output. Is the output easily understood?
- The speed of the output. Time-sensitive information is important when communicating with the customer in real time.
===Cost objectives=== Achieving performance objectives should be balanced with all of the costs, including:{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 514 | isbn = 0-619-06489-7 }}
- Development costs.
- Uniqueness costs. A reusable system may be expensive. However, it might be preferred over a limited-use system.
- Hardware costs.
- Operating costs.
Applying a [[Systems development life cycle|systems development process]] will mitigate the axiom: the later in the process an error is detected, the more expensive it is to correct.{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 516 | isbn = 0-619-06489-7 }}
===Waterfall model=== The [[waterfall model]] is an implementation of a ''systems development process''.{{cite book | last = Schach | first = Stephen R. | title = Software Engineering | publisher = Aksen Associates Incorporated Publishers | year = 1990 | page = 8 | isbn = 0-256-08515-3 }} As the ''waterfall'' label implies, the basic phases overlap each other:{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 517 | isbn = 0-619-06489-7 }}
- The ''investigation phase'' is to understand the underlying problem.
- The ''analysis phase'' is to understand the possible solutions.
- The ''design phase'' is to [[Software design|plan]] the best solution.
- The ''implementation phase'' is to program the best solution.
- The ''maintenance phase'' lasts throughout the life of the system. Changes to the system after it is deployed may be necessary.{{cite book | last = Schach | first = Stephen R. | title = Software Engineering | publisher = Aksen Associates Incorporated Publishers | year = 1990 | page = 345 | isbn = 0-256-08515-3 }} Faults may exist, including specification faults, design faults, or coding faults. Improvements may be necessary. Adaptation may be necessary to react to a changing environment.
===Computer programmer=== A [[computer programmer]] is a specialist responsible for writing or modifying the source code to implement the detailed plan. A programming team is likely to be needed because most systems are too large to be completed by a single programmer.{{cite book | last = Schach | first = Stephen R. | title = Software Engineering | publisher = Aksen Associates Incorporated Publishers | year = 1990 | page = 319 | isbn = 0-256-08515-3 }} However, adding programmers to a project may not shorten the completion time. Instead, it may lower the quality of the system. To be effective, program modules need to be defined and distributed to team members. Also, team members must interact with one another in a meaningful and effective way.
Computer programmers may be [[Programming in the large and programming in the small#Programming in the small|programming in the small]]: programming within a single module.{{cite book | last = Schach | first = Stephen R. | title = Software Engineering | publisher = Aksen Associates Incorporated Publishers | year = 1990 | page = 331 | isbn = 0-256-08515-3 }} Chances are a module will execute modules located in other source code files. Therefore, computer programmers may be [[programming in the large]]: programming modules so they will effectively couple with each other. Programming-in-the-large includes contributing to the [[application programming interface]] (API).
===Program modules=== [[Modular programming]] is a technique to refine ''imperative language'' programs. Refined programs may reduce the software size, separate responsibilities, and thereby mitigate [[software aging]]. A ''program module'' is a sequence of statements that are bounded within a [[Block (programming)|block]] and together identified by a name.{{cite book | last = Schach | first = Stephen R. | title = Software Engineering | publisher = Aksen Associates Incorporated Publishers | year = 1990 | page = 216 | isbn = 0-256-08515-3 }} Modules have a ''function'', ''context'', and ''logic'':{{cite book | last = Schach | first = Stephen R. | title = Software Engineering | publisher = Aksen Associates Incorporated Publishers | year = 1990 | page = 219 | isbn = 0-256-08515-3 }}
- The ''function'' of a module is what it does.
- The ''context'' of a module are the elements being performed upon.
- The ''logic'' of a module is how it performs the function.
The module's name should be derived first by its ''function'', then by its ''context''. Its ''logic'' should not be part of the name. For example, function compute_square_root( x ) or function compute_square_root_integer( i : integer ) are appropriate module names. However, function compute_square_root_by_division( x ) is not.
The degree of interaction ''within'' a module is its level of [[Cohesion (computer science)|cohesion]]. ''Cohesion'' is a judgment of the relationship between a module's name and its ''function''. The degree of interaction ''between'' modules is the level of [[Coupling (computer science)|coupling]].{{cite book | last = Schach | first = Stephen R. | title = Software Engineering | publisher = Aksen Associates Incorporated Publishers | year = 1990 | page = 226 | isbn = 0-256-08515-3 }} ''Coupling'' is a judgment of the relationship between a module's ''context'' and the elements being performed upon.
===Cohesion=== The levels of cohesion from worst to best are:{{cite book | last = Schach | first = Stephen R. | title = Software Engineering | publisher = Aksen Associates Incorporated Publishers | year = 1990 | page = 220 | isbn = 0-256-08515-3 }}
- ''Coincidental Cohesion'': A module has coincidental cohesion if it performs multiple functions, and the functions are completely unrelated. For example, function read_sales_record_print_next_line_convert_to_float(). Coincidental cohesion occurs in practice if management enforces silly rules. For example, "Every module will have between 35 and 50 executable statements."
- ''Logical Cohesion'': A module has logical cohesion if it has available a series of functions, but only one of them is executed. For example, function perform_arithmetic( perform_addition, a, b ).
- ''Temporal Cohesion'': A module has temporal cohesion if it performs functions related to time. One example, function initialize_variables_and_open_files(). Another example, stage_one(), stage_two(), ...
- ''Procedural Cohesion'': A module has procedural cohesion if it performs multiple loosely related functions. For example, function read_part_number_update_employee_record().
- ''Communicational Cohesion'': A module has communicational cohesion if it performs multiple closely related functions. For example, function read_part_number_update_sales_record().
- ''Informational Cohesion'': A module has informational cohesion if it performs multiple functions, but each function has its own entry and exit points. Moreover, the functions share the same data structure. Object-oriented classes work at this level.
- ''Functional Cohesion'': a module has functional cohesion if it achieves a single goal working only on local variables. Moreover, it may be reusable in other contexts.
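As an illustration of the best level, a functionally cohesive module in C might look like the hypothetical function below (not from the cited text): it achieves one goal, works only on its parameters and local variables, and is reusable in other contexts.

```c
/* Functional cohesion: one goal (compute an average), */
/* only local variables, reusable in any context.      */
double compute_average( const double values[], int count )
{
    double sum = 0.0;    /* local accumulator */

    for ( int i = 0; i < count; i++ )
        sum += values[i];

    return count > 0 ? sum / count : 0.0;
}
```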
===Coupling=== The levels of coupling from worst to best are:
- ''Content Coupling'': A module has content coupling if it modifies a [[local variable]] of another function. COBOL used to do this with the ''alter'' verb.
- ''Common Coupling'': A module has common coupling if it modifies a global variable.
- ''Control Coupling'': A module has control coupling if another module can modify its [[control flow]]. For example, perform_arithmetic( perform_addition, a, b ). Instead, control should be on the makeup of the returned object.
- ''Stamp Coupling'': A module has stamp coupling if an element of a data structure passed as a parameter is modified. Object-oriented classes work at this level.
- ''Data Coupling'': A module has data coupling if all of its input parameters are needed and none of them are modified. Moreover, the result of the function is returned as a single object.
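The contrast between the control-coupled and data-coupled levels can be sketched in C (hypothetical functions, echoing the perform_arithmetic example above):

```c
/* Control coupling (worse): the caller's flag steers */
/* the callee's control flow.                         */
double perform_arithmetic( int perform_addition, double a, double b )
{
    return perform_addition ? a + b : a - b;
}

/* Data coupling (best): every parameter is needed, none */
/* is modified, and the result is a single value.        */
double perform_addition_only( double a, double b )
{
    return a + b;
}
```

Splitting perform_arithmetic() into single-purpose functions removes the control flag, leaving only the data each module needs.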
===Data flow analysis=== [[File:Sandwich data flow diagram.pdf|thumb|A sample function-level data-flow diagram]] ''Data flow analysis'' is a design method used to achieve modules of ''functional cohesion'' and ''data coupling''.{{cite book | last = Schach | first = Stephen R. | title = Software Engineering | publisher = Aksen Associates Incorporated Publishers | year = 1990 | page = 258 | isbn = 0-256-08515-3 }} The input to the method is a [[data-flow diagram]]. A data-flow diagram is a set of ovals representing modules. Each module's name is displayed inside its oval. Modules may be at the executable level or the function level.
The diagram also has arrows connecting modules to each other. Arrows pointing into modules represent a set of inputs. Each module should have only one arrow pointing out from it to represent its single output object. (Optionally, an additional exception arrow points out.) A [[Daisy chain (electrical engineering)|daisy chain]] of ovals will convey an entire [[algorithm]]. The input modules should start the diagram. The input modules should connect to the transform modules. The transform modules should connect to the output modules.{{cite book | last = Schach | first = Stephen R. | title = Software Engineering | publisher = Aksen Associates Incorporated Publishers | year = 1990 | page = 259 | isbn = 0-256-08515-3 }}
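A diagram of this shape maps directly onto code. The sketch below (hypothetical module names, not from the cited text) daisy-chains an input module to a transform module to an output module, each producing a single output object:

```c
#include <stdio.h>

/* Input module: produce the raw value. */
double read_measurement( void )
{
    return 98.6;    /* stands in for real input */
}

/* Transform module: one set of inputs, one output object. */
double fahrenheit_to_celsius( double f )
{
    return ( f - 32.0 ) * 5.0 / 9.0;
}

/* Output module: consume the transformed value. */
void print_celsius( double c )
{
    printf( "%.1f C\n", c );
}

/* The daisy chain of modules conveys the entire algorithm. */
void run_pipeline( void )
{
    print_celsius( fahrenheit_to_celsius( read_measurement( ) ) );
}
```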
==Functional categories== [[File:Operating system placement (software).svg|thumb|upright|A diagram showing that the [[User (computing)|user]] interacts with the [[application software]]. The application software interacts with the [[operating system]], which interacts with the [[Computer hardware|hardware]].]]
''Computer programs'' may be categorized along functional lines. The main functional categories are [[application software]] and [[system software]]. System software includes the [[operating system]], which couples [[computer hardware]] with application software. The purpose of the operating system is to provide an environment where application software executes in a convenient and efficient manner.{{cite book | last = Silberschatz | first = Abraham | title = Operating System Concepts, Fourth Edition | publisher = Addison-Wesley | year = 1994 | page = 1 | isbn = 978-0-201-50480-4 }} Both application software and system software execute [[Utility software|utility programs]]. At the hardware level, a [[Microcode|microcode program]] controls the circuits throughout the [[central processing unit]].
===Application software=== {{Main|Application software}} Application software is the key to unlocking the potential of the computer system.{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 147 | isbn = 0-619-06489-7 | quote = The key to unlocking the potential of any computer system is application software. }} [[Enterprise application software]] bundles accounting, personnel, customer, and vendor applications. Examples include [[enterprise resource planning]], [[customer relationship management]], and [[supply chain management software]].
Enterprise applications may be developed in-house as a one-of-a-kind [[proprietary software]].{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 147 | isbn = 0-619-06489-7 }} Alternatively, they may be purchased as [[off-the-shelf software]]. Purchased software may be modified to provide [[custom software]]. If the application is customized, then either the company's resources are used or the resources are outsourced. Outsourced software development may be from the original software vendor or a third-party developer.{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 147 | isbn = 0-619-06489-7 | quote = [A] third-party software firm, often called a value-added software vendor, may develop or modify a software program to meet the needs of a particular industry or company. }}
The potential advantages of in-house software are that features and reports may be developed exactly to specification.{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 148 | isbn = 0-619-06489-7 | quote = Heading: Proprietary Software; Subheading: Advantages; Quote: You can get exactly what you need in terms of features, reports, and so on. }} Management may also be involved in the development process and offer a level of control.{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 148 | isbn = 0-619-06489-7 | quote = Heading: Proprietary Software; Subheading: Advantages; Quote: Being involved in the development offers a further level of control over the results. }} Management may decide to counteract a competitor's new initiative or implement a customer or vendor requirement.{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 147 | isbn = 0-619-06489-7 | quote = Heading: Proprietary Software; Subheading: Advantages; Quote: There is more flexibility in making modifications that may be required to counteract a new initiative by one of your competitors or to meet new supplier and/or customer requirements. }} A merger or acquisition may necessitate enterprise software changes. The potential disadvantages of in-house software are that time and resource costs may be extensive. Furthermore, risks concerning features and performance may be looming.
The potential advantages of off-the-shelf software are that upfront costs are identifiable, the basic needs should be fulfilled, and its performance and reliability have a track record. The potential disadvantages of off-the-shelf software are that it may have unnecessary features that confuse end users, it may lack features the enterprise needs, and the data flow may not match the enterprise's work processes.
====Application service provider==== One approach to economically obtaining a customized enterprise application is through an [[application service provider]].{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 149 | isbn = 0-619-06489-7 }} Specialty companies provide hardware, custom software, and end-user support. They may speed the development of new applications because they possess skilled information system staff. The biggest advantage is that it frees in-house resources from staffing and managing complex computer projects. Many application service providers target small, fast-growing companies with limited information system resources. On the other hand, larger companies with major systems will likely have their technical infrastructure already in place. One risk is having to trust an external organization with sensitive information. Another risk is having to trust the provider's infrastructure reliability.
===Operating system=== {{See also|Operating system}} [[File:Concepts- Program vs. Process vs. Thread.jpg|thumb|Program vs. [[Process (computing)|Process]] vs. [[Thread (computing)|Thread]] [[Scheduling (computing)|Scheduling]], [[Preemption (computing)|Preemption]], [[Context switch|Context Switching]]|upright=1.8]] An [[operating system]] is the low-level software that supports a computer's basic functions, such as [[Scheduling (computing)|scheduling]] [[Process (computing)|processes]] and controlling [[peripheral]]s.
In the 1950s, the programmer, who was also the operator, would write a program and run it. After the program finished executing, the output may have been printed, or it may have been punched onto paper tape or cards for later processing. More often than not, the program did not work. The programmer then looked at the console lights and fiddled with the console switches. If the programmer was less fortunate, a memory printout was made for further study. In the 1960s, programmers reduced the amount of wasted time by automating the operator's job. A program called an ''operating system'' was kept in the computer at all times.{{cite book |url=https://archive.org/details/structuredcomput00tane/page/11 |title=Structured Computer Organization, Third Edition |last=Tanenbaum |first=Andrew S. |publisher=Prentice Hall |year=1990 |isbn=978-0-13-854662-5 |page=[https://archive.org/details/structuredcomput00tane/page/11 11]}}
The term ''operating system'' may refer to two levels of software.{{cite book |title=The Linux Programming Interface |last=Kerrisk |first=Michael |publisher=No Starch Press |year=2010 |isbn=978-1-59327-220-3 |page=21}} The operating system may refer to the [[Kernel (operating system)|kernel program]] that manages the [[Process (computing)|processes]], [[Computer memory|memory]], and [[Peripheral|devices]]. More broadly, the operating system may refer to the entire package of the central software. The package includes a kernel program, [[Command-line interface|command-line interpreter]], [[graphical user interface]], [[Utility software|utility programs]], and [[Source-code editor|editor]].
====Kernel program==== [[File:Kernel Layout.svg|thumb|A kernel connects the application software to the hardware of a computer.]] The kernel's main purpose is to manage the limited resources of a computer:
- The kernel program should perform [[process scheduling]],{{cite book |title=The Linux Programming Interface |last=Kerrisk |first=Michael |publisher=No Starch Press |year=2010 |isbn=978-1-59327-220-3 |page=22}} switching between processes via a [[context switch]]. The kernel creates a [[process control block]] when a ''computer program'' is [[Loader (computing)|selected for execution]]. However, an executing program gets exclusive access to the [[central processing unit]] only for a [[time slice]]. To provide each user with the [[Time-sharing|appearance of continuous access]], the kernel quickly [[Preemption (computing)|preempts]] each executing process, saves its state in its process control block, and dispatches another. The goal for [[Systems programming|system developers]] is to minimize [[dispatch latency]]. [[File:Virtual memory.svg|thumb|250px|Physical memory is scattered around RAM and the hard disk. Virtual memory is one continuous block.]]
- The kernel program should perform [[memory management]]. :* When the kernel initially [[Loader (computing)|loads]] an executable into memory, it divides the address space logically into [[Region-based memory management|regions]].{{cite book | last = Bach | first = Maurice J. | title = The Design of the UNIX Operating System | publisher = Prentice-Hall, Inc. | year = 1986 | page = 152 | isbn = 0-13-201799-7 }} The kernel maintains a master-region table and many per-process-region (pregion) tables—one for each running [[Process (computing)|process]]. These tables constitute the [[virtual address space]]. The master-region table is used to determine where each region's contents are located in [[physical memory]]. The pregion tables allow each process to have its own program (text) pregion, data pregion, and stack pregion. :* The program pregion stores machine instructions. Since machine instructions do not change, the program pregion may be shared by many processes of the same executable. :* To save time and memory, the kernel may load only blocks of execution instructions from the disk drive, not the entire executable file. :* The kernel is responsible for translating virtual addresses into [[physical address]]es. The kernel may request data from the [[memory controller]] and, instead, receive a [[page fault]].{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Sixth Edition | publisher = Pearson | year = 2013 | page = 443 | isbn = 978-0-13-291652-3 }} If so, the kernel accesses the [[memory management unit]] to populate the physical data region and translate the address.{{cite book | last = Lacamera | first = Daniele | title = Embedded Systems Architecture | publisher = Packt | year = 2018 | page = 8 | isbn = 978-1-78883-250-2 }} :* The kernel allocates memory from the ''heap'' upon request by a process. When the process is finished with the memory, the process may request that it be [[Manual memory management|freed]]. If the process exits without requesting that all allocated memory be freed, then the kernel performs [[Garbage collection (computer science)|garbage collection]] to free the memory. :* The kernel also ensures that a process only accesses its own memory, and not that of the kernel or other processes.
- The kernel program should perform [[File system|file system management]]. The kernel has instructions to create, retrieve, update, and delete files.
- The kernel program should perform [[Peripheral|device management]]. The kernel provides programs to standardize and simplify the interface to the mouse, keyboard, disk drives, printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time.
- The kernel program should perform [[network management]].{{cite book |title=The Linux Programming Interface |last=Kerrisk |first=Michael |publisher=No Starch Press |year=2010 |isbn=978-1-59327-220-3 |page=23}} The kernel transmits and receives [[Network packet|packets]] on behalf of processes. One key service is to find an efficient [[Routing table|route]] to the target system.
- The kernel program should provide [[system calls|system level functions]] for programmers to use.{{cite book |title=The Unix Programming Environment |last=Kernighan |first=Brian W. |publisher=Prentice Hall |year=1984 |isbn=0-13-937699-2 |page=201}} :* Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface. The low-level interface includes file creation, [[file descriptor]]s, file seeking, physical reading, and physical writing. :* Programmers create processes through a relatively simple interface that in turn executes a relatively complicated low-level interface. :* Programmers perform date/time arithmetic through a relatively simple interface that in turn executes a relatively complicated low-level time interface.{{cite book |title=The Linux Programming Interface |last=Kerrisk |first=Michael |publisher=No Starch Press |year=2010 |isbn=978-1-59327-220-3 |page=187}}
- The kernel program should provide a [[Inter-process communication|communication channel]] between executing processes.{{cite book |title=Unix System Programming |last=Haviland |first=Keith |publisher=Addison-Wesley Publishing Company |year=1987 |isbn=0-201-12919-1 |page=121}} For a large software system, it may be desirable to [[Software engineering|engineer]] the system into smaller processes. Processes may communicate with one another by sending and receiving [[Signal (IPC)|signals]].
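The kernel's translation of virtual addresses into physical addresses can be sketched as follows. This is a simplified, hypothetical single-level page table model; the page size, table contents, and names are illustrative assumptions, not any real kernel's data structures.

```python
# A simplified sketch of virtual-to-physical address translation.
# The page size, page-table contents, and names are illustrative
# assumptions, not any real kernel's implementation.

PAGE_SIZE = 4096  # bytes per page (a common but hypothetical choice)

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 7, 1: 3, 2: 9}

class PageFault(Exception):
    """Raised when a virtual page has no physical frame mapped."""

def translate(virtual_address):
    """Translate a virtual address to a physical address."""
    page_number, offset = divmod(virtual_address, PAGE_SIZE)
    if page_number not in page_table:
        raise PageFault(f"page {page_number} not resident")
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 12292
```

An unmapped page raises `PageFault`, modeling the point at which a real kernel would consult the memory management unit and load the missing page.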
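Communication by sending and receiving signals can be sketched with Python's standard library on a POSIX system. For brevity this hypothetical example has a process signal itself; in practice the sender and receiver are usually separate processes.

```python
# A minimal sketch of signal-based communication on a POSIX system,
# using only Python's standard library. A process signals itself here
# for brevity; real senders and receivers are usually distinct.

import os
import signal

received = []

def handler(signum, frame):
    # The kernel delivers the signal by invoking this handler.
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)  # register interest in SIGUSR1
os.kill(os.getpid(), signal.SIGUSR1)    # ask the kernel to deliver it

print(received == [signal.SIGUSR1])     # True once the signal arrives
```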
Originally, operating systems were programmed in [[assembly language|assembly]]; however, modern operating systems are typically written in higher-level languages like [[C (programming language)|C]], [[Objective-C]], and [[Swift (programming language)|Swift]].{{efn|The [[UNIX]] operating system was written in C, [[macOS]] was written in Objective-C, and Swift replaced Objective-C.}}
===Utility program=== A [[utility (computing)|utility]] is a program that aids system administration and software execution. An operating system typically provides utilities to check hardware such as storage, memory, speakers, and printers.{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 145 | isbn = 0-619-06489-7 }} A utility may optimize the performance of a storage device. System utilities monitor hardware and network performance and may trigger an alert when a metric is outside the nominal range.{{cite book | last = Stair | first = Ralph M. | title = Principles of Information Systems, Sixth Edition | publisher = Thomson | year = 2003 | page = 146 | isbn = 0-619-06489-7 }} A utility may compress files to reduce storage space and network transmission time. A utility may sort and merge data sets or detect [[computer virus]]es.
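The compression utility mentioned above can be sketched with Python's standard gzip module; the sample data is an illustrative assumption.

```python
# A sketch of a file-compression utility using Python's standard
# library. The sample data is illustrative.

import gzip

data = b"hello world " * 1000           # highly repetitive sample data

compressed = gzip.compress(data)        # shrink for storage/transmission
restored = gzip.decompress(compressed)  # recover the original bytes

print(len(data), len(compressed))       # the compressed form is smaller
print(restored == data)                 # True: compression is lossless
```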
===Microcode program=== {{main|Microcode}} [[File:Not-gate-en.svg|thumb|96px|right|NOT gate]] [[File:NAND_ANSI_Labelled.svg|thumb|96px|right|NAND gate]] [[File:NOR_ANSI_Labelled.svg|thumb|96px|right|NOR gate]] [[File:AND_ANSI_Labelled.svg|thumb|96px|right|AND gate]] [[File:OR_ANSI_Labelled.svg|thumb|96px|right|OR gate]] A [[Microcode|microcode program]] is the bottom-level interpreter{{efn|The bottom-level interpreter is technically called the Level 1 layer. The Level 0 layer is the digital logic layer. Three middle layers exist, and the Level 5 layer is the Problem-oriented language layer.{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Sixth Edition | publisher = Pearson | year = 2013 | page = 5 | isbn = 978-0-13-291652-3 }}}} that controls the [[datapath]] of software-driven computers.{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Sixth Edition | publisher = Pearson | year = 2013 | page = 6 | isbn = 978-0-13-291652-3 }} (Advances in [[Random logic|hardware]] have migrated these operations to [[Control unit#Hardwired control unit|hardware execution circuits]].) Microcode instructions allow the programmer to more easily implement the [[Logic level|digital logic level]]{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Sixth Edition | publisher = Pearson | year = 2013 | page = 243 | isbn = 978-0-13-291652-3 }}—the computer's real hardware. The digital logic level is the boundary between [[computer science]] and [[computer engineering]].{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Sixth Edition | publisher = Pearson | year = 2013 | page = 147 | isbn = 978-0-13-291652-3 }}
A [[logic gate]] is a tiny [[Field-effect transistor|transistor]] that can return one of two signals: on or off.{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Sixth Edition | publisher = Pearson | year = 2013 | page = 148 | isbn = 978-0-13-291652-3 }}
- Having one transistor forms the [[NOT gate]].
- Connecting two transistors in series forms the [[NAND gate]].
- Connecting two transistors in parallel forms the [[NOR gate]].
- Connecting a NOT gate to a NAND gate forms the [[AND gate]].
- Connecting a NOT gate to a NOR gate forms the [[OR gate]].
These five gates form the building blocks of [[Boolean algebra|binary algebra]]—the digital logic functions of the computer.
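The five constructions above can be sketched as Boolean functions. NOT, NAND, and NOR stand in for the transistor circuits; AND and OR are composed from them exactly as the text describes. This is an illustrative model, not hardware.

```python
# A sketch of the five gate constructions, modeled as Boolean
# functions. NOT, NAND, and NOR stand in for the transistor circuits;
# AND and OR are composed from them as the text describes.

def NOT(a):
    return not a                 # one transistor

def NAND(a, b):
    return not (a and b)         # two transistors in series

def NOR(a, b):
    return not (a or b)          # two transistors in parallel

def AND(a, b):
    return NOT(NAND(a, b))       # a NOT gate connected to a NAND gate

def OR(a, b):
    return NOT(NOR(a, b))        # a NOT gate connected to a NOR gate

# Verify the composed gates against their truth tables.
for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
print("truth tables verified")
```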
Microcode instructions are [[Assembly language#Mnemonics|mnemonics]] programmers may use to execute digital logic functions instead of forming them in binary algebra. They are stored in a [[central processing unit|central processing unit's]] (CPU) [[control store]].{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Sixth Edition | publisher = Pearson | year = 2013 | page = 253 | isbn = 978-0-13-291652-3 }} These hardware-level instructions move data throughout the [[data path]].
The micro-instruction cycle begins when the [[microsequencer]] uses its microprogram counter to ''fetch'' the next [[machine instruction]] from [[random-access memory]].{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Sixth Edition | publisher = Pearson | year = 2013 | page = 255 | isbn = 978-0-13-291652-3 }} The next step is to ''decode'' the machine instruction by selecting the proper output line to the hardware module.{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Sixth Edition | publisher = Pearson | year = 2013 | page = 161 | isbn = 978-0-13-291652-3 }} The final step is to ''execute'' the instruction using the hardware module's set of gates.
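The fetch-decode-execute cycle can be sketched as a toy interpreter. The instruction set, memory layout, and register names below are invented for illustration and do not correspond to any real machine.

```python
# A toy sketch of the fetch-decode-execute cycle. The instruction
# set, memory layout, and register names are invented for
# illustration and match no real machine.

memory = [("LOAD", 5), ("ADD", 3), ("HALT", 0)]  # a tiny program
accumulator = 0
program_counter = 0

while True:
    opcode, operand = memory[program_counter]    # fetch the instruction
    program_counter += 1
    if opcode == "LOAD":                         # decode: select the module
        accumulator = operand                    # execute
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        break

print(accumulator)  # 8
```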
[[File:ALU block.svg|thumb|right|A symbolic representation of an ALU]] Instructions to perform arithmetic are passed through an [[arithmetic logic unit]] (ALU).{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Sixth Edition | publisher = Pearson | year = 2013 | page = 166 | isbn = 978-0-13-291652-3 }} The ALU has circuits to perform elementary operations to add, shift, and compare integers. By combining and looping the elementary operations through the ALU, the CPU performs its complex arithmetic.
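Looping the ALU's elementary operations to perform complex arithmetic can be sketched with shift-and-add multiplication, which builds a product of non-negative integers using only addition, shifts, and comparisons. The routine is illustrative, not a description of any particular CPU.

```python
# A sketch of building complex arithmetic by looping elementary ALU
# operations: this multiplies two non-negative integers using only
# addition, bit shifts, and comparisons.

def multiply(a, b):
    product = 0
    while b > 0:                   # compare
        if b & 1:                  # inspect the low bit of b
            product = product + a  # add
        a = a << 1                 # shift a left (double it)
        b = b >> 1                 # shift b right (halve it)
    return product

print(multiply(6, 7))  # 42
```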
Microcode instructions move data between the CPU and the [[memory controller]]. Memory controller microcode instructions manipulate two [[Processor register|registers]]. The [[memory address register]] is used to access each memory cell's address. The [[memory data register]] is used to set and read each cell's contents.{{cite book | last = Tanenbaum | first = Andrew S. | title = Structured Computer Organization, Sixth Edition | publisher = Pearson | year = 2013 | page = 249 | isbn = 978-0-13-291652-3 }}
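The two-register protocol above can be sketched as a tiny memory model: the address register selects a cell, and the data register carries its contents. The class and cell count are illustrative assumptions.

```python
# A sketch of the memory address register (MAR) and memory data
# register (MDR) protocol. The class name and cell count are
# illustrative assumptions.

memory_cells = [0] * 16   # a tiny bank of memory cells

class MemoryController:
    def __init__(self, cells):
        self.cells = cells
        self.mar = 0      # memory address register: selects a cell
        self.mdr = 0      # memory data register: carries the contents

    def write(self):
        # Store the MDR's contents at the cell the MAR selects.
        self.cells[self.mar] = self.mdr

    def read(self):
        # Load the MDR from the cell the MAR selects.
        self.mdr = self.cells[self.mar]

controller = MemoryController(memory_cells)
controller.mar, controller.mdr = 4, 99
controller.write()        # cell 4 now holds 99
controller.mdr = 0
controller.read()
print(controller.mdr)     # 99
```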
==Notes== {{Notelist}}
==References== {{reflist|30em}}
{{DEFAULTSORT:Computer Program}} [[Category:Computer programming]] [[Category:Software]]