In linguistics and computer science, a context-free grammar (CFG) is a formal grammar in which every production rule is of the form
- V → w
where V is a non-terminal symbol and w is a string consisting of terminals and/or non-terminals. The term "context-free" comes from the fact that the non-terminal V can always be replaced by w, regardless of the context in which it occurs. A formal language is context-free if there is a context-free grammar that generates it.
Context-free grammars are powerful enough to describe the syntax of most programming languages; in fact, the syntax of most programming languages is specified using context-free grammars. On the other hand, context-free grammars are simple enough to allow the construction of efficient parsing algorithms which, for a given string, determine whether and how it can be generated from the grammar. An Earley parser is an example of such an algorithm, while LR and LL parsers handle only more restrictive subsets of context-free grammars.
BNF (Backus-Naur Form) is the most common notation used to express context-free grammars.
Not all formal languages are context-free; a well-known counterexample is { a^n b^n c^n : n ≥ 1 }, the set of strings containing some number of a's, followed by the same number of b's and the same number of c's. This particular language can be generated by a parsing expression grammar, a relatively new formalism that is particularly well-suited to programming languages.
Formal definition
As with any formal grammar, a context-free grammar G can be defined as a 4-tuple G = (Σ, N, P, S), where
- Σ is a finite set of terminals
- N is a finite set of non-terminals
- P is a finite set of production rules
- S is an element of N, the distinguished starting non-terminal
- elements of P are of the form A → w, where A ∈ N and w ∈ (Σ ∪ N)*
Examples
Example 1
A simple context-free grammar is
- S → aSb | ε
where | is used to separate multiple options for the same non-terminal, and ε stands for the empty string. This grammar generates the language { a^n b^n : n ≥ 0 }, which is not regular.
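Because this grammar has a single non-terminal and each derivation simply chooses how many times to apply the first rule, a derivation of a^n b^n can be sketched directly (a minimal Python illustration; the function name is our own):

```python
def derive_anbn(n):
    """Derive a^n b^n: apply S -> aSb exactly n times, then S -> epsilon."""
    sentential = "S"
    for _ in range(n):
        sentential = sentential.replace("S", "aSb", 1)  # rule S -> aSb
    return sentential.replace("S", "", 1)               # rule S -> epsilon

print(derive_anbn(3))  # -> aaabbb
```

Each intermediate value of `sentential` (S, aSb, aaSbb, ...) is one sentential form of the derivation.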
Example 2
Here is a context-free grammar for syntactically correct infix algebraic expressions in the variables x, y and z:
- S → x | y | z | S + S | S - S | S * S | S / S | ( S )
This grammar can, for example, generate the string "( x + y ) * x - z * y / ( x + x )".
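To illustrate how derivations in this grammar produce such strings, here is a small Python sketch (the function and the depth cutoff are our own devices, not part of the grammar) that builds a random expression by repeatedly expanding S:

```python
import random

# Alternatives for the single non-terminal S, taken from the grammar above
ALTERNATIVES = ["x", "y", "z", "S + S", "S - S", "S * S", "S / S", "( S )"]

def random_expression(max_depth, depth=0):
    """Expand S with a randomly chosen alternative; once max_depth is
    reached, only the terminal alternatives x, y, z are allowed, so the
    derivation is guaranteed to terminate."""
    pool = ALTERNATIVES[:3] if depth >= max_depth else ALTERNATIVES
    rhs = random.choice(pool).split()
    return " ".join(random_expression(max_depth, depth + 1) if s == "S" else s
                    for s in rhs)

random.seed(1)
print(random_expression(3))  # a random syntactically correct expression
```

Every string this sketch can emit is derivable from S, since each step applies one of the grammar's own rules.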
Example 3
A context-free grammar for the language consisting of all strings over {a,b} which contain a different number of a's than b's is
- S → U | V
- U → TaU | TaT
- V → TbV | TbT
- T → aTbT | bTaT | ε
Here, T can generate all strings with the same number of a's as b's, U generates all strings with more a's than b's and V generates all strings with fewer a's than b's.
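This claim can be spot-checked by exhaustively deriving every terminal string up to a small length (a brute-force sketch; the breadth-first search and its pruning bound are our own):

```python
GRAMMAR = {
    "S": ["U", "V"],
    "U": ["TaU", "TaT"],
    "V": ["TbV", "TbT"],
    "T": ["aTbT", "bTaT", ""],
}

def words_up_to(limit, start="S"):
    """Expand the leftmost non-terminal in every sentential form, pruning
    forms that already contain more than `limit` terminals, and collect
    the fully terminal strings."""
    seen, frontier, words = {start}, {start}, set()
    while frontier:
        nxt = set()
        for form in frontier:
            i = next((j for j, c in enumerate(form) if c.isupper()), None)
            if i is None:
                words.add(form)
                continue
            for rhs in GRAMMAR[form[i]]:
                new = form[:i] + rhs + form[i + 1:]
                # the terminal count only grows, so pruning here is safe
                if sum(c.islower() for c in new) <= limit and new not in seen:
                    seen.add(new)
                    nxt.add(new)
        frontier = nxt
    return words

words = words_up_to(4)
assert all(w.count("a") != w.count("b") for w in words)
print(sorted(w for w in words if len(w) <= 2))  # -> ['a', 'aa', 'b', 'bb']
```

Note that balanced strings such as "ab" never appear, exactly as the grammar intends.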
Example 4
Another example of a context-free language is { a^n b^m c^(n+m) : n, m ≥ 0 }, the set of strings of a's followed by b's, followed by exactly as many c's as a's and b's combined. This is not a regular language, but it is context-free, as it is generated by the following CFG (context-free grammar):
- S → aSc | B
- B → bBc | ε
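Each application of S → aSc pairs an a with a c, and each application of B → bBc pairs a b with a c, so the grammar generates exactly the strings a^n b^m c^(n+m). A direct membership check based on that characterization (a sketch of our own, not a parser for the grammar) is:

```python
def in_language(w):
    """Membership test for { a^n b^m c^(n+m) : n, m >= 0 }:
    S -> aSc pairs each a with a c, B -> bBc pairs each b with a c."""
    n = len(w) - len(w.lstrip("a"))        # leading a's
    rest = w[n:]
    m = len(rest) - len(rest.lstrip("b"))  # following b's
    tail = rest[m:]
    return set(tail) <= {"c"} and len(tail) == n + m

print(in_language("aabccc"), in_language("abc"))  # -> True False
```

"abc" is rejected because one a and one b require two c's, not one.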
Other examples
Context-free grammars are not limited in application to mathematical ("formal") languages. The grammar of Lojban, a spoken artificial language with immense expressive power, is also context-free, and unambiguous. More recently, it has been suggested that a class of Tamil poetry called Venpa is governed by a context-free grammar.
Derivations and syntax trees
There are two common ways to describe how a given string can be derived from the start symbol of a given grammar. The simplest way is to list the consecutive strings of symbols, beginning with the start symbol and ending with the string, and the rules that have been applied. If we introduce a strategy such as "always replace the left-most nonterminal first" then for context-free grammars the list of applied grammar rules is by itself sufficient. This is called the leftmost derivation of a string. For example, if we take the following grammar:
- (1) S → S + S
- (2) S → 1
- (3) S → a
and the string "1 + 1 + a", then a leftmost derivation of this string is the list [ (1), (1), (2), (2), (3) ]. Analogously, the rightmost derivation is defined as the list we get if we always replace the rightmost nonterminal first. In this case it could be the list [ (1), (3), (1), (2), (2) ].
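Both derivations can be replayed mechanically from the rule numbers alone (a small sketch; the dictionary encoding of the three rules is our own):

```python
# Rules from the example: (1) S -> S + S, (2) S -> 1, (3) S -> a
RULES = {1: ("S", "S + S"), 2: ("S", "1"), 3: ("S", "a")}

def replay(rule_numbers, leftmost=True, start="S"):
    """Rewrite the leftmost (or rightmost) non-terminal with each listed rule."""
    form = [start]
    for r in rule_numbers:
        lhs, rhs = RULES[r]
        occurrences = [i for i, s in enumerate(form) if s == lhs]
        i = occurrences[0] if leftmost else occurrences[-1]
        form[i:i + 1] = rhs.split()
    return " ".join(form)

print(replay([1, 1, 2, 2, 3]))                  # -> 1 + 1 + a
print(replay([1, 3, 1, 2, 2], leftmost=False))  # -> 1 + 1 + a
```

The two rule lists differ, yet both rewrite the start symbol into the same string.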
The distinction between leftmost and rightmost derivations is important because in most parsers the transformation of the input is defined by attaching a piece of code to every grammar rule, executed whenever the rule is applied. It is therefore important to know whether the parser determines a leftmost or a rightmost derivation, because this determines the order in which the pieces of code are executed. See, for example, LL parsers and LR parsers.
A derivation also imposes, in some sense, a hierarchical structure on the string that is derived. For example, the structure of the string "1 + 1 + a" would, according to the leftmost derivation, be:
- S→S+S (1)
- S→S+S+S (1)
- S→1+S+S (2)
- S→1+1+S (2)
- S→1+1+a (3)
- {{{1}_S + {1}_S}_S + {a}_S}_S
where {...}_S indicates a substring recognized as belonging to S. This hierarchy can also be seen as a tree:
```
        S
       /|\
      / | \
     /  |  \
    S  '+'  S
   /|\      |
  / | \     |
 S '+' S   'a'
 |     |
'1'   '1'
```
This tree is called a concrete syntax tree (see also abstract syntax tree) of the string. In this case the leftmost and rightmost derivations presented above define the same syntax tree; however, there is another (leftmost) derivation of the same string:
- S→ S + S (1)
- S→ 1 + S (2)
- S→ 1 + S + S (1)
- S→ 1 + 1 + S (2)
- S→ 1 + 1 + a (3)
and this defines the following syntax tree:
```
        S
       /|\
      / | \
     /  |  \
    S  '+'  S
    |      /|\
    |     / | \
   '1'   S '+' S
         |     |
        '1'   'a'
```
If, for certain strings in the language of the grammar, there is more than one parse tree, then the grammar is said to be an ambiguous grammar. Such grammars are usually hard to parse because the parser cannot always decide which grammar rule it has to apply.
Normal forms
Every context-free grammar that does not generate the empty string can be transformed into an equivalent one in Chomsky normal form or Greibach normal form. "Equivalent" here means that the two grammars generate the same language.
Because of the especially simple form of production rules in Chomsky Normal Form grammars, this normal form has both theoretical and practical implications. For instance, given a context-free grammar, one can use the Chomsky Normal Form to construct a polynomial-time algorithm which decides whether a given string is in the language represented by that grammar or not (the CYK algorithm).
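As a sketch of how the CYK algorithm exploits Chomsky Normal Form, here is the a^n b^n grammar of Example 1, restricted to n ≥ 1 and converted to CNF by hand (the conversion and all rule names are our own): S → AB | AX, X → SB, A → a, B → b.

```python
from itertools import product

# Hand-converted CNF grammar for { a^n b^n : n >= 1 }:
#   S -> A B | A X,   X -> S B,   A -> a,   B -> b
TERMINAL_RULES = {"a": {"A"}, "b": {"B"}}
BINARY_RULES = {("A", "B"): {"S"}, ("A", "X"): {"S"}, ("S", "B"): {"X"}}

def cyk(word, start="S"):
    """table[i][l] holds the non-terminals deriving the substring of
    length l + 1 beginning at position i."""
    n = len(word)
    if n == 0:
        return False  # a CNF grammar never generates the empty string
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = set(TERMINAL_RULES.get(ch, ()))
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                left = table[i][split - 1]
                right = table[i + split][length - split - 1]
                for pair in product(left, right):
                    table[i][length - 1] |= BINARY_RULES.get(pair, set())
    return start in table[0][n - 1]

print(cyk("aaabbb"), cyk("aabbb"))  # -> True False
```

The three nested loops over substring lengths, start positions, and split points give the algorithm its cubic running time in the length of the string.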
Undecidable problems
Although some operations on context-free grammars are decidable because of the grammars' limited power, CFGs do have interesting undecidable problems. One of the simplest and most cited is the problem of deciding whether a CFG generates the language of all strings. A reduction to this problem can be made from the well-known undecidable problem of determining whether a Turing machine accepts a particular input. The reduction uses the concept of a computation history, a string describing an entire computation of a Turing machine. One can construct a CFG that generates all strings that are not accepting computation histories for a particular Turing machine on a particular input; thus this CFG generates all strings if and only if the machine does not accept that input.
As a consequence, it is also undecidable whether two CFGs describe the same language, since we cannot even decide whether a CFG is equivalent to the trivial CFG generating the language of all strings.
On the other hand, the problem of determining whether a CFG generates at least one string is decidable.
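The emptiness check is a simple least-fixpoint computation over the rules (a sketch; the rule encoding is our own): mark a non-terminal "generating" when some alternative consists only of terminals and already-marked non-terminals, and repeat until nothing changes.

```python
def generates_anything(rules, start="S"):
    """Decide whether L(G) is non-empty.  `rules` maps each non-terminal
    to a list of right-hand sides, each a list of symbols."""
    nonterminals = set(rules)
    generating = set()
    changed = True
    while changed:
        changed = False
        for lhs, alternatives in rules.items():
            if lhs in generating:
                continue
            if any(all(s in generating or s not in nonterminals for s in rhs)
                   for rhs in alternatives):
                generating.add(lhs)
                changed = True
    return start in generating

# S -> aSb | epsilon derives a string; T -> aT can never finish a derivation
print(generates_anything({"S": [["a", "S", "b"], []]}))    # -> True
print(generates_anything({"T": [["a", "T"]]}, start="T"))  # -> False
```

The loop runs at most once per non-terminal, so the whole check takes polynomial time in the size of the grammar.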
Properties of context-free languages
- An alternative and equivalent definition of context-free languages employs non-deterministic push-down automata: a language is context-free if and only if it can be accepted by such an automaton.
- A language can also be modeled as the set of all sequences of terminals generated by the grammar. This model is helpful in understanding set operations on languages.
- The union and concatenation of two context-free languages is context-free, but the intersection need not be.
- The reverse of a context-free language is context-free, but the complement need not be.
- Every regular language is context-free because it can be described by a regular grammar.
- The intersection of a context-free language and a regular language is always context-free.
- There exist context-sensitive languages which are not context-free.
- To prove that a given language is not context-free, one may employ the pumping lemma for context-free languages.
- Another point worth mentioning is that the problem of determining if a context-sensitive grammar describes a context-free language is undecidable.
See also
- Parsing
- Formal grammar
- Parsing expression grammar
- Stochastic context-free grammar
References
- Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Section 2.1: Context-Free Grammars, pp. 91–101; Section 4.1.2: Decidable problems concerning context-free languages, pp. 156–159; Section 5.1.1: Reductions via computation histories, pp. 176–183.
Automata theory: formal languages and formal grammars

| Chomsky hierarchy | Grammars | Languages | Minimal automaton |
|---|---|---|---|
| Type-0 | Unrestricted | Recursively enumerable | Turing machine |
| n/a | (no common name) | Recursive | Decider |
| Type-1 | Context-sensitive | Context-sensitive | Linear-bounded |
| Type-2 | Context-free | Context-free | Pushdown |
| Type-3 | Regular | Regular | Finite |

Each category of languages or grammars is a proper superset of the category directly beneath it.
This page uses Creative Commons Licensed content from Wikipedia (view authors). |