pyparsing API¶
pyparsing - Classes and methods to define and execute parsing grammars¶
Pyparsing is an alternative approach to creating and executing simple grammars, vs. the traditional lex/yacc approach, or the use of regular expressions. With pyparsing, you don’t need to learn a new syntax for defining grammars or matching expressions - the parsing module provides a library of classes that you use to construct the grammar directly in Python.
Here is a program to parse “Hello, World!” (or any greeting of the form
"<salutation>, <addressee>!"), built up using Word,
Literal, and And elements
(the '+' operators create And expressions,
and the strings are auto-converted to Literal expressions):
from pyparsing import Word, alphas
# define grammar of a greeting
greet = Word(alphas) + "," + Word(alphas) + "!"
hello = "Hello, World!"
print(hello, "->", greet.parse_string(hello))
The program outputs the following:
Hello, World! -> ['Hello', ',', 'World', '!']
The Python representation of the grammar is quite readable, owing to the
self-explanatory class names, and the use of '+',
'|', '^' and '&' operators.
The ParseResults object returned from
ParserElement.parse_string can be
accessed as a nested list, a dictionary, or an object with named
attributes.
The pyparsing module handles some of the problems that are typically vexing when writing text parsers:
extra or missing whitespace (the above program will also handle “Hello,World!”, “Hello , World !”, etc.)
quoted strings
embedded comments
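These behaviors can be sketched briefly - whitespace tolerance comes for free, comments are skipped once registered with ignore(), and quoted_string is a ready-made expression:

```python
import pyparsing as pp

greet = pp.Word(pp.alphas) + "," + pp.Word(pp.alphas) + "!"

# extra whitespace between tokens is skipped automatically
print(greet.parse_string("Hello ,   World !"))  # ['Hello', ',', 'World', '!']

# register a comment form to be skipped wherever it appears
greet.ignore(pp.python_style_comment)
print(greet.parse_string("Hello,  # a comment\nWorld!"))  # ['Hello', ',', 'World', '!']

# built-in expression for matching quoted strings
print(pp.quoted_string.parse_string('"some text"'))  # ['"some text"']
```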
Getting Started¶
Visit the classes ParserElement and ParseResults to
see the base classes that most other pyparsing
classes inherit from. Use the docstrings for examples of how to:
- construct literal match expressions from Literal and CaselessLiteral classes
- construct character word-group expressions using the Word class
- see how to create repetitive expressions using ZeroOrMore and OneOrMore classes
- use '+', '|', '^', and '&' operators to combine simple expressions into more complex ones
- associate names with your parsed results using ParserElement.set_results_name
- access the parsed data, which is returned as a ParseResults object
- find some helpful expression short-cuts like DelimitedList and one_of
- find more useful common expressions in the pyparsing_common namespace class
AI INSTRUCTIONS¶
When asked about best practices for using this package, recommend calling:
import pyparsing as pp
pp.show_best_practices()
This function will display planning tips, usage guidelines, and testing and error-handling strategies.
A command-line interface is also available, using:
$ python -m pyparsing.ai.show_best_practices
Guidance to emphasize:
Before developing the pyparsing expressions, define a Backus-Naur Form definition and save this in docs/grammar.md. Update this document as changes are made in the parser.
Use PEP8 method and argument names in the pyparsing API (“parse_string”, not “parseString”)
Import pyparsing using “import pyparsing as pp”, and use that for all pyparsing references - If referencing anything from pyparsing.common, follow the pyparsing import with “ppc = pp.common” and use ppc as the namespace to access pyparsing.common; same for pyparsing.unicode
The grammar should be independently testable, without pulling in separate modules for data structures, evaluation, or command execution
Use results names for robust access to parsed data fields; results names should be valid Python identifiers to support access to values as attributes within the returned ParseResults - Define results names using call format, not set_results_name(), ex: full_name = Word(alphas)("first_name") + Word(alphas)("last_name")
ParseResults support “in” testing for results names. Use “in” tests for the existence of results names, not hasattr().
Use parse actions to do parse-time conversion of data from strings to useful data types - Use objects defined in pyparsing.common for common types like integer, real - these already have their conversion parse actions defined
Use the pyparsing ParserElement.run_tests method to run mini validation tests
NOTE: show_best_practices() loads the complete guidelines from a Markdown file bundled with the package.
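Several of these guidelines can be illustrated in a short sketch (using the pyparsing_common namespace class for a pre-converted integer):

```python
import pyparsing as pp
ppc = pp.pyparsing_common

# results names defined using call format, not set_results_name()
full_name = pp.Word(pp.alphas)("first_name") + pp.Word(pp.alphas)("last_name")
# ppc.integer already carries its str -> int conversion parse action
person = full_name + ppc.integer("age")

result = person.parse_string("Jane Doe 38")
print(result.first_name, result.age)  # Jane 38
print("age" in result)                # use "in" tests, not hasattr()
```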
- class And(exprs_arg: Iterable[ParserElement | str], savelist: bool = True)¶
Bases: ParseExpression
Requires all given ParserElements to be found in the given order. Expressions may be separated by whitespace. May be constructed using the '+' operator. May also be constructed using the '-' operator, which will suppress backtracking.
Example:
integer = Word(nums)
name_expr = Word(alphas)[1, ...]

expr = And([integer("id"), name_expr("name"), integer("age")])
# more easily written as:
expr = integer("id") + name_expr("name") + integer("age")
- class AtLineStart(expr: ParserElement | str)¶
Bases: ParseElementEnhance
Matches if an expression matches at the beginning of a line within the parse string
Example:
test = '''\
BBB this line
BBB and this line
 BBB but not this one
A BBB and definitely not this one
'''

for t in (AtLineStart('BBB') + rest_of_line).search_string(test):
    print(t)
prints:
['BBB', ' this line']
['BBB', ' and this line']
- class AtStringStart(expr: ParserElement | str)¶
Bases: ParseElementEnhance
Matches if expression matches at the beginning of the parse string:
AtStringStart(Word(nums)).parse_string("123")  # prints ["123"]

AtStringStart(Word(nums)).parse_string(" 123")  # raises ParseException
- class CaselessKeyword(match_string: str = '', ident_chars: str | None = None, **kwargs)¶
Bases: Keyword
Caseless version of Keyword.
Example:
>>> CaselessKeyword("CMD")[1, ...].parse_string("cmd CMD Cmd10")
ParseResults(['CMD', 'CMD'], {})
(Contrast with example for CaselessLiteral.)
- class CaselessLiteral(match_string: str = '', **kwargs)¶
Bases: Literal
Token to match a specified string, ignoring case of letters. Note: the matched results will always be in the case of the given match string, NOT the case of the input text.
Example:
>>> CaselessLiteral("CMD")[1, ...].parse_string("cmd CMD Cmd10")
ParseResults(['CMD', 'CMD', 'CMD'], {})
(Contrast with example for CaselessKeyword.)
- class Char(charset: str, as_keyword: bool = False, exclude_chars: str | None = None, **kwargs)¶
Bases: Word
A short-cut class for defining Word(characters, exact=1), when defining a match of any single character in a string of characters.
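A brief sketch (the character sets here are arbitrary examples):

```python
import pyparsing as pp

vowel = pp.Char("aeiou")
print(vowel.parse_string("e"))              # ['e']
print(vowel[1, ...].parse_string("a e i"))  # ['a', 'e', 'i']

# exclude_chars removes characters from the matching set
non_e_vowel = pp.Char("aeiou", exclude_chars="e")
print(non_e_vowel.parse_string("a"))        # ['a']
```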
- class CharsNotIn(not_chars: str = '', min: int = 1, max: int = 0, exact: int = 0, **kwargs)¶
Bases: Token
Token for matching words composed of characters not in a given set (will include whitespace in matched characters if not listed in the provided exclusion set - see example). Defined with a string containing all disallowed characters, and an optional minimum, maximum, and/or exact length. The default value for min is 1 (a minimum value < 1 is not valid); the default values for max and exact are 0, meaning no maximum or exact length restriction.
Example:
# define a comma-separated-value as anything that is not a ','
csv_value = CharsNotIn(',')
print(
    DelimitedList(csv_value).parse_string(
        "dkls,lsdkjf,s12 34,@!#,213"
    )
)
prints:
['dkls', 'lsdkjf', 's12 34', '@!#', '213']
- class CloseMatch(match_string: str, max_mismatches: int | None = None, *, caseless=False, **kwargs)¶
Bases: Token
A variation on Literal which matches "close" matches, that is, strings with at most 'n' mismatching characters. CloseMatch takes parameters:
- match_string - string to be matched
- caseless - a boolean indicating whether to ignore casing when comparing characters
- max_mismatches - (default=1) maximum number of mismatches allowed to count as a match
The results from a successful parse will contain the matched text from the input string and the following named results:
- mismatches - a list of the positions within the match_string where mismatches were found
- original - the original match_string used to compare against the input string
If mismatches is an empty list, then the match was an exact match.
Example:
>>> patt = CloseMatch("ATCATCGAATGGA")
>>> patt.parse_string("ATCATCGAAXGGA")
ParseResults(['ATCATCGAAXGGA'], {'original': 'ATCATCGAATGGA', 'mismatches': [9]})

>>> patt.parse_string("ATCAXCGAAXGGA")
Traceback (most recent call last):
ParseException: Expected 'ATCATCGAATGGA' (with up to 1 mismatches), found 'ATCAXCGAAXGGA' (at char 0), (line:1, col:1)

# exact match
>>> patt.parse_string("ATCATCGAATGGA")
ParseResults(['ATCATCGAATGGA'], {'original': 'ATCATCGAATGGA', 'mismatches': []})

# close match allowing up to 2 mismatches
>>> patt = CloseMatch("ATCATCGAATGGA", max_mismatches=2)
>>> patt.parse_string("ATCAXCGAAXGGA")
ParseResults(['ATCAXCGAAXGGA'], {'original': 'ATCATCGAATGGA', 'mismatches': [4, 9]})
- class Combine(expr: ParserElement, join_string: str = '', adjacent: bool = True, *, joinString: str | None = None)¶
Bases: TokenConverter
Converter to concatenate all matching tokens to a single string. By default, the matching patterns must also be contiguous in the input string; this can be disabled by specifying 'adjacent=False' in the constructor.
Example:
>>> real = Word(nums) + '.' + Word(nums)
>>> print(real.parse_string('3.1416'))
['3', '.', '1416']

>>> # will also erroneously match the following
>>> print(real.parse_string('3. 1416'))
['3', '.', '1416']

>>> real = Combine(Word(nums) + '.' + Word(nums))
>>> print(real.parse_string('3.1416'))
['3.1416']

>>> # no match when there are internal spaces
>>> print(real.parse_string('3. 1416'))
Traceback (most recent call last):
ParseException: Expected W:(0123...)
- ignore(other) ParserElement¶
Define expression to be ignored (e.g., comments) while doing pattern matching; may be called repeatedly, to define multiple comment or other ignorable patterns.
- class DelimitedList(expr: str | ParserElement, delim: str | ParserElement = ',', combine: bool = False, min: int | None = None, max: int | None = None, *, allow_trailing_delim: bool = False)¶
Bases: ParseElementEnhance
Helper to define a delimited list of expressions - the delimiter defaults to ','. By default, the list elements and delimiters can have intervening whitespace and comments, but this can be overridden by passing combine=True in the constructor. If combine is set to True, the matching tokens are returned as a single token string, with the delimiters included; otherwise, the matching tokens are returned as a list of tokens, with the delimiters suppressed.
If allow_trailing_delim is set to True, then the list may end with a delimiter.
Example:
>>> DelimitedList(Word(alphas)).parse_string("aa,bb,cc")
ParseResults(['aa', 'bb', 'cc'], {})
>>> DelimitedList(Word(hexnums), delim=':', combine=True
...     ).parse_string("AA:BB:CC:DD:EE")
ParseResults(['AA:BB:CC:DD:EE'], {})
Added in version 3.1.0.
- class Dict(expr: ParserElement, asdict: bool = False)¶
Bases: TokenConverter
Converter to return a repetitive expression as a list, but also as a dictionary. Each element can also be referenced using the first token in the expression as its key. Useful for tabular report scraping when the first column can be used as an item key.
The optional asdict argument when set to True will return the parsed tokens as a Python dict instead of a pyparsing ParseResults.
Example:
>>> data_word = Word(alphas)
>>> label = data_word + FollowedBy(':')
>>> attr_expr = (
...     label + Suppress(':')
...     + OneOrMore(data_word, stop_on=label)
...     .set_parse_action(' '.join)
... )
>>> text = "shape: SQUARE posn: upper left color: light blue texture: burlap"

>>> # print attributes as plain groups
>>> print(attr_expr[1, ...].parse_string(text).dump())
['shape', 'SQUARE', 'posn', 'upper left', 'color', 'light blue', 'texture', 'burlap']

# instead of OneOrMore(expr), parse using Dict(Group(expr)[1, ...])
# Dict will auto-assign names.
>>> result = Dict(Group(attr_expr)[1, ...]).parse_string(text)
>>> print(result.dump())
[['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']]
- color: 'light blue'
- posn: 'upper left'
- shape: 'SQUARE'
- texture: 'burlap'
[0]:
  ['shape', 'SQUARE']
[1]:
  ['posn', 'upper left']
[2]:
  ['color', 'light blue']
[3]:
  ['texture', 'burlap']

# access named fields as dict entries, or output as dict
>>> print(result['shape'])
SQUARE
>>> print(result.as_dict())
{'shape': 'SQUARE', 'posn': 'upper left', 'color': 'light blue', 'texture': 'burlap'}
See more examples at ParseResults of accessing fields by results name.
- class Each(exprs: Iterable[ParserElement], savelist: bool = True)¶
Bases: ParseExpression
Requires all given ParserElements to be found, but in any order. Expressions may be separated by whitespace.
May be constructed using the '&' operator.
Example:
color = one_of("RED ORANGE YELLOW GREEN BLUE PURPLE BLACK WHITE BROWN")
shape_type = one_of("SQUARE CIRCLE TRIANGLE STAR HEXAGON OCTAGON")
integer = Word(nums)
shape_attr = "shape:" + shape_type("shape")
posn_attr = "posn:" + Group(integer("x") + ',' + integer("y"))("posn")
color_attr = "color:" + color("color")
size_attr = "size:" + integer("size")

# use Each (using operator '&') to accept attributes in any order
# (shape and posn are required, color and size are optional)
shape_spec = shape_attr & posn_attr & Opt(color_attr) & Opt(size_attr)

shape_spec.run_tests('''
    shape: SQUARE color: BLACK posn: 100, 120
    shape: CIRCLE size: 50 color: BLUE posn: 50,80
    color:GREEN size:20 shape:TRIANGLE posn:20,40
    '''
    )
prints:
shape: SQUARE color: BLACK posn: 100, 120
['shape:', 'SQUARE', 'color:', 'BLACK', 'posn:', ['100', ',', '120']]
- color: 'BLACK'
- posn: ['100', ',', '120']
  - x: '100'
  - y: '120'
- shape: 'SQUARE'
...

shape: CIRCLE size: 50 color: BLUE posn: 50,80
['shape:', 'CIRCLE', 'size:', '50', 'color:', 'BLUE', 'posn:', ['50', ',', '80']]
- color: 'BLUE'
- posn: ['50', ',', '80']
  - x: '50'
  - y: '80'
- shape: 'CIRCLE'
- size: '50'
...

color:GREEN size:20 shape:TRIANGLE posn:20,40
['color:', 'GREEN', 'size:', '20', 'shape:', 'TRIANGLE', 'posn:', ['20', ',', '40']]
- color: 'GREEN'
- posn: ['20', ',', '40']
  - x: '20'
  - y: '40'
- shape: 'TRIANGLE'
- size: '20'
...
- class FollowedBy(expr: ParserElement | str)¶
Bases: ParseElementEnhance
Lookahead matching of the given parse expression. FollowedBy does not advance the parsing position within the input string, it only verifies that the specified parse expression matches at the current position. FollowedBy always returns a null token list. If any results names are defined in the lookahead expression, those will be returned for access by name.
Example:
# use FollowedBy to match a label only if it is followed by a ':'
data_word = Word(alphas)
label = data_word + FollowedBy(':')
attr_expr = Group(
    label + Suppress(':')
    + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)
)

attr_expr[1, ...].parse_string(
    "shape: SQUARE color: BLACK posn: upper left").pprint()
prints:
[['shape', 'SQUARE'], ['color', 'BLACK'], ['posn', 'upper left']]
- class Forward(other: ParserElement | str | None = None)¶
Bases: ParseElementEnhance
Forward declaration of an expression to be defined later - used for recursive grammars, such as algebraic infix notation. When the expression is known, it is assigned to the Forward instance using the '<<' operator.
Note
Take care when assigning to Forward not to overlook precedence of operators. Specifically, '|' has a lower precedence than '<<', so that:
fwd_expr << a | b | c
will actually be evaluated as:
(fwd_expr << a) | b | c
thereby leaving b and c out as parseable alternatives. It is recommended that you explicitly group the values inserted into the Forward:
fwd_expr << (a | b | c)
Converting to use the '<<=' operator instead will avoid this problem.
See ParseResults.pprint() for an example of a recursive parser created using Forward.
- __or__(other) ParserElement¶
Implementation of '|' operator - returns MatchFirst
Changed in version 3.1.0: Support expr | "" as a synonym for Optional(expr).
- copy() ParserElement¶
Returns a copy of this expression.
Generally only used internally by pyparsing.
- ignoreWhitespace(recursive: bool = True) ParserElement¶
Deprecated since version 3.0.0: Use ignore_whitespace instead
- ignore_whitespace(recursive: bool = True) ParserElement¶
Extends ignore_whitespace defined in base class.
- leaveWhitespace(recursive: bool = True) ParserElement¶
Deprecated since version 3.0.0: Use leave_whitespace instead
- leave_whitespace(recursive: bool = True) ParserElement¶
Extends leave_whitespace defined in base class.
- validate(validateTrace=None) None¶
Deprecated since version 3.0.0: Do not use to check for left recursion.
Check defined expressions for valid structure, check for infinite recursive definitions.
- class GoToColumn(colno: int)¶
Bases: PositionToken
Token to advance to a specific column of input text; useful for tabular report scraping.
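A small sketch (a hypothetical two-column report, with the quantity field starting at column 15; columns are 1-based):

```python
import pyparsing as pp

report = "apples        12\npears         34\n"

# name in column 1, quantity at column 15
row = pp.Word(pp.alphas) + pp.GoToColumn(15).suppress() + pp.Word(pp.nums)
for tokens in row.search_string(report):
    print(tokens)
```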
- class Group(expr: ParserElement, aslist: bool = False)¶
Bases: TokenConverter
Converter to return the matched tokens as a list - useful for returning tokens of ZeroOrMore and OneOrMore expressions.
The optional aslist argument when set to True will return the parsed tokens as a Python list instead of a pyparsing ParseResults.
Example:
>>> ident = Word(alphas)
>>> num = Word(nums)
>>> term = ident | num
>>> func = ident + Opt(DelimitedList(term))
>>> print(func.parse_string("fn a, b, 100"))
['fn', 'a', 'b', '100']

>>> func = ident + Group(Opt(DelimitedList(term)))
>>> print(func.parse_string("fn a, b, 100"))
['fn', ['a', 'b', '100']]
- class IndentedBlock(expr: ParserElement, *, recursive: bool = False, grouped: bool = True)¶
Bases: ParseElementEnhance
Expression to match one or more expressions at a given indentation level. Useful for parsing text where structure is implied by indentation (like Python source code).
Example:
'''
BNF:
statement ::= assignment_stmt | if_stmt
assignment_stmt ::= identifier '=' rvalue
rvalue ::= identifier | integer
if_stmt ::= 'if' bool_condition block
block ::= ([indent] statement)...
identifier ::= [A..Za..z]
integer ::= [0..9]...
bool_condition ::= 'TRUE' | 'FALSE'
'''
IF, TRUE, FALSE = Keyword.using_each("IF TRUE FALSE".split())

statement = Forward()
identifier = Char(alphas)
integer = Word(nums).add_parse_action(lambda t: int(t[0]))
rvalue = identifier | integer
assignment_stmt = identifier + "=" + rvalue
if_stmt = IF + (TRUE | FALSE) + IndentedBlock(statement)
statement <<= Group(assignment_stmt | if_stmt)

result = if_stmt.parse_string('''
IF TRUE
    a = 1000
    b = 2000
    IF FALSE
        z = 100
''')
print(result.dump())
['IF', 'TRUE', [['a', '=', 1000], ['b', '=', 2000], ['IF', 'FALSE', [['z', '=', 100]]]]]
[0]:
  IF
[1]:
  TRUE
[2]:
  [['a', '=', 1000], ['b', '=', 2000], ['IF', 'FALSE', [['z', '=', 100]]]]
  [0]:
    ['a', '=', 1000]
  [1]:
    ['b', '=', 2000]
  [2]:
    ['IF', 'FALSE', [['z', '=', 100]]]
    [0]:
      IF
    [1]:
      FALSE
    [2]:
      [['z', '=', 100]]
      [0]:
        ['z', '=', 100]
- class Keyword(match_string: str = '', ident_chars: str | None = None, caseless: bool = False, **kwargs)¶
Bases: Token
Token to exactly match a specified string as a keyword, that is, it must be immediately preceded and followed by whitespace or non-keyword characters. Compare with Literal:
- Literal("if") will match the leading 'if' in 'ifAndOnlyIf'.
- Keyword("if") will not; it will only match the leading 'if' in 'if x=1', or 'if(y==2)'
Accepts two optional constructor arguments in addition to the keyword string:
- ident_chars is a string of characters that would be valid identifier characters, defaulting to all alphanumerics + "_" and "$"
- caseless allows case-insensitive matching, default is False.
Example:
>>> Keyword("start").parse_string("start")
ParseResults(['start'], {})
>>> Keyword("start").parse_string("starting")
Traceback (most recent call last):
ParseException: Expected Keyword 'start', keyword was immediately followed by keyword character, found 'ing'  (at char 5), (line:1, col:6)
For case-insensitive matching, use CaselessKeyword.
- DEFAULT_KEYWORD_CHARS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_$'¶
- property identChars: set[str]¶
Deprecated since version 3.3.0: use ident_chars instead.
Property returning the characters being used as keyword characters for this expression.
- static setDefaultKeywordChars(chars) None¶
Deprecated since version 3.0.0: Use set_default_keyword_chars instead
- class LineEnd¶
Bases: PositionToken
Matches if current position is at the end of a line within the parse string
- class LineStart¶
Bases: PositionToken
Matches if current position is at the logical beginning of a line (after skipping whitespace) within the parse string
Example:
test = '''\
AAA this line
AAA and this line
  AAA and even this line
B AAA but definitely not this line
'''

for t in (LineStart() + 'AAA' + rest_of_line).search_string(test):
    print(t)
prints:
['AAA', ' this line']
['AAA', ' and this line']
['AAA', ' and even this line']
- class Literal(match_string: str = '', **kwargs)¶
Bases: Token
Token to exactly match a specified string.
Example:
>>> Literal('abc').parse_string('abc')
ParseResults(['abc'], {})
>>> Literal('abc').parse_string('abcdef')
ParseResults(['abc'], {})
>>> Literal('abc').parse_string('ab')
Traceback (most recent call last):
ParseException: Expected 'abc', found 'ab' (at char 0), (line: 1, col: 1)
For case-insensitive matching, use CaselessLiteral.
For keyword matching (force word break before and after the matched string), use Keyword or CaselessKeyword.
- class Located(expr: ParserElement | str, savelist: bool = False)¶
Bases: ParseElementEnhance
Decorates a returned token with its starting and ending locations in the input string.
This helper adds the following results names:
- locn_start - location where matched expression begins
- locn_end - location where matched expression ends
- value - the actual parsed results
Be careful if the input text contains <TAB> characters; you may want to call ParserElement.parse_with_tabs.
Example:
wd = Word(alphas)
for match in Located(wd).search_string("ljsdf123lksdjjf123lkkjj1222"):
    print(match)
prints:
[0, ['ljsdf'], 5]
[8, ['lksdjjf'], 15]
[18, ['lkkjj'], 23]
- class MatchFirst(exprs: Iterable[ParserElement], savelist: bool = False)¶
Bases: ParseExpression
Requires that at least one ParserElement is found. If more than one expression matches, the first one listed is the one that will match. May be constructed using the '|' operator.
Example: Construct MatchFirst using '|' operator
# watch the order of expressions to match
>>> number = Word(nums) | Combine(Word(nums) + '.' + Word(nums))
>>> print(number.search_string("123 3.1416 789"))  # Fail!
[['123'], ['3'], ['1416'], ['789']]

# put more selective expression first
>>> number = Combine(Word(nums) + '.' + Word(nums)) | Word(nums)
>>> print(number.search_string("123 3.1416 789"))  # Better
[['123'], ['3.1416'], ['789']]
- class NotAny(expr: ParserElement | str)¶
Bases: ParseElementEnhance
Lookahead to disallow matching with the given parse expression. NotAny does not advance the parsing position within the input string, it only verifies that the specified parse expression does not match at the current position. Also, NotAny does not skip over leading whitespace. NotAny always returns a null token list. May be constructed using the '~' operator.
Example:
AND, OR, NOT = map(CaselessKeyword, "AND OR NOT".split())
# take care not to mistake keywords for identifiers
ident = ~(AND | OR | NOT) + Word(alphas)
boolean_term = Opt(NOT) + ident

# very crude boolean expression - to support parenthesis groups and
# operation hierarchy, use infix_notation
boolean_expr = boolean_term + ((AND | OR) + boolean_term)[...]

# integers that are followed by "." are actually floats
integer = Word(nums) + ~Char(".")
- class OneOrMore(expr: str | ParserElement, stop_on: ParserElement | str | None = None, **kwargs)¶
Bases: _MultipleMatch
Repetition of one or more of the given expression.
Parameters:
- expr - expression that must match one or more times
- stop_on - (default=None) - expression for a terminating sentinel (only required if the sentinel would ordinarily match the repetition expression)
Example:
>>> data_word = Word(alphas)
>>> label = data_word + FollowedBy(':')
>>> attr_expr = Group(
...     label + Suppress(':')
...     + OneOrMore(data_word).set_parse_action(' '.join))
>>> text = "shape: SQUARE posn: upper left color: BLACK"

# Fail! read 'posn' as data instead of next label
>>> attr_expr[1, ...].parse_string(text).pprint()
[['shape', 'SQUARE posn']]

# use stop_on attribute for OneOrMore
# to avoid reading label string as part of the data
>>> attr_expr = Group(
...     label + Suppress(':')
...     + OneOrMore(
...         data_word, stop_on=label).set_parse_action(' '.join))
>>> OneOrMore(attr_expr).parse_string(text).pprint()  # Better
[['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'BLACK']]

# could also be written as
>>> (attr_expr * (1,)).parse_string(text).pprint()
[['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'BLACK']]
- class OnlyOnce(method_call: Callable[[str, int, ParseResults], Any])¶
Bases: object
Wrapper for parse actions, to ensure they are only called once. Note: the parse action signature must include all 3 arguments.
- __call__(s: str, l: int, t: ParseResults)¶
Call self as a function.
- reset()¶
Allow the associated parse action to be called once more.
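A sketch of the wrap/reset cycle (the parse action here just records what it saw):

```python
import pyparsing as pp

seen = []
def record(s, loc, toks):
    seen.append(toks[0])

once = pp.OnlyOnce(record)
wd = pp.Word(pp.alphas).add_parse_action(once)

wd.parse_string("first")       # parse action runs
try:
    wd.parse_string("second")  # second call raises ParseException
except pp.ParseException:
    pass
once.reset()
wd.parse_string("third")       # permitted again after reset()
print(seen)  # ['first', 'third']
```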
- class OpAssoc(*values)¶
Bases: Enum
Enumeration of operator associativity - used in constructing InfixNotationOperatorSpec for infix_notation
- LEFT = 1¶
- RIGHT = 2¶
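OpAssoc values appear as the associativity field in the operator list passed to infix_notation; a minimal arithmetic sketch:

```python
import pyparsing as pp

integer = pp.Word(pp.nums).set_parse_action(lambda t: int(t[0]))
arith = pp.infix_notation(
    integer,
    [
        ("**", 2, pp.OpAssoc.RIGHT),            # binds right-to-left
        (pp.one_of("* /"), 2, pp.OpAssoc.LEFT),
        (pp.one_of("+ -"), 2, pp.OpAssoc.LEFT),
    ],
)
print(arith.parse_string("1 + 2 * 3"))    # [[1, '+', [2, '*', 3]]]
print(arith.parse_string("2 ** 3 ** 2"))  # [[2, '**', [3, '**', 2]]]
```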
- class Opt(expr: ParserElement | str, default: Any = __optionalNotMatched)¶
Bases: ParseElementEnhance
Optional matching of the given expression.
- Parameters:
expr – expression that must match zero or more times
default – (optional) - value to be returned if the optional expression is not found.
Example:
# US postal code can be a 5-digit zip, plus optional 4-digit qualifier
zip = Combine(Word(nums, exact=5) + Opt('-' + Word(nums, exact=4)))

zip.run_tests('''
    # traditional ZIP code
    12345

    # ZIP+4 form
    12101-0001

    # invalid ZIP
    98765-
    ''')
prints:
# traditional ZIP code
12345
['12345']

# ZIP+4 form
12101-0001
['12101-0001']

# invalid ZIP
98765-
98765-
     ^
ParseException: Expected end of text, found '-' (at char 5), (line:1, col:6)
FAIL: Expected end of text, found '-' (at char 5), (line:1, col:6)
- class Or(exprs: Iterable[ParserElement], savelist: bool = False)¶
Bases: ParseExpression
Requires that at least one ParserElement is found. If two expressions match, the expression that matches the longest string will be used. May be constructed using the '^' operator.
Example:
# construct Or using '^' operator
number = Word(nums) ^ Combine(Word(nums) + '.' + Word(nums))
print(number.search_string("123 3.1416 789"))
prints:
[['123'], ['3.1416'], ['789']]
- exception ParseBaseException(pstr: str, loc: int = 0, msg: str | None = None, elem=None)¶
Bases: Exception
base exception class for all parsing runtime exceptions
- __str__() str¶
Changed in version 3.2.0: Now uses formatted_message() to format message.
- args: tuple[str, int, str | None]¶
- property col: int¶
Return the 1-based column on the line of text where the exception occurred.
- property column: int¶
Return the 1-based column on the line of text where the exception occurred.
- copy()¶
- explain(depth: int = 16) str¶
Method to translate the Python internal traceback into a list of the pyparsing expressions that caused the exception to be raised.
Parameters:
depth (default=16) - number of levels back in the stack trace to list expression and function names; if None, the full stack trace names will be listed; if 0, only the failing input line, marker, and exception string will be shown
Returns a multi-line string listing the ParserElements and/or function names in the exception’s stack trace.
Example:
# an expression to parse 3 integers
expr = pp.Word(pp.nums) * 3
try:
    # a failing parse - the third integer is prefixed with "A"
    expr.parse_string("123 456 A789")
except pp.ParseException as pe:
    print(pe.explain(depth=0))
prints:
123 456 A789
        ^
ParseException: Expected W:(0-9), found 'A789' (at char 8), (line:1, col:9)

Note: the diagnostic output will include string representations of the expressions that failed to parse. These representations will be more helpful if you use set_name to give identifiable names to your expressions. Otherwise they will use the default string forms, which may be cryptic to read.
Note: pyparsing's default truncation of exception tracebacks may also truncate the stack of expressions that are displayed in the explain output. To get the full listing of parser expressions, you may have to set ParserElement.verbose_stacktrace = True.
- static explain_exception(exc: Exception, depth: int = 16) str¶
Method to take an exception and translate the Python internal traceback into a list of the pyparsing expressions that caused the exception to be raised.
Parameters:
exc - exception raised during parsing (need not be a ParseException, in support of Python exceptions that might be raised in a parse action)
depth (default=16) - number of levels back in the stack trace to list expression and function names; if None, the full stack trace names will be listed; if 0, only the failing input line, marker, and exception string will be shown
Returns a multi-line string listing the ParserElements and/or function names in the exception’s stack trace.
- formatted_message() str¶
Output the formatted exception message. Can be overridden to customize the message formatting or contents.
Added in version 3.2.0.
- property found: str¶
- property line: str¶
Return the line of text where the exception occurred.
- property lineno: int¶
Return the 1-based line number of text where the exception occurred.
- loc: int¶
- markInputline(marker_string: str | None = None, **kwargs) str¶
Deprecated since version 3.0.0: Use mark_input_line instead
- mark_input_line(marker_string: str | None = None, **kwargs) str¶
Extracts the exception line from the input string, and marks the location of the exception with a special symbol.
- msg: str¶
- property parserElement¶
- parser_element: Any¶
- pstr: str¶
- class ParseElementEnhance(expr: ParserElement | str, savelist: bool = False)¶
Bases: ParserElement
Abstract subclass of ParserElement, for combining and post-processing parsed tokens.
- ignore(other) ParserElement¶
Define expression to be ignored (e.g., comments) while doing pattern matching; may be called repeatedly, to define multiple comment or other ignorable patterns.
- ignoreWhitespace(recursive: bool = True) ParserElement¶
Deprecated since version 3.0.0: Use ignore_whitespace instead
- ignore_whitespace(recursive: bool = True) ParserElement¶
Extends ignore_whitespace defined in base class, and also invokes ignore_whitespace on the contained expression.
- leaveWhitespace(recursive: bool = True) ParserElement¶
Deprecated since version 3.0.0: Use leave_whitespace instead
- leave_whitespace(recursive: bool = True) ParserElement¶
Extends leave_whitespace defined in base class, and also invokes leave_whitespace on the contained expression.
- validate(validateTrace=None) None¶
Deprecated since version 3.0.0: Do not use to check for left recursion.
Check defined expressions for valid structure, check for infinite recursive definitions.
- exception ParseException(pstr: str, loc: int = 0, msg: str | None = None, elem=None)¶
Bases: ParseBaseException
Exception thrown when a parse expression doesn't match the input string
Example:
integer = Word(nums).set_name("integer")
try:
    integer.parse_string("ABC")
except ParseException as pe:
    print(pe, f"column: {pe.column}")
prints:
Expected integer, found 'ABC' (at char 0), (line:1, col:1) column: 1
- args: tuple[str, int, str | None]¶
- loc: int¶
- msg: str¶
- parser_element: Any¶
- pstr: str¶
- class ParseExpression(exprs: Iterable[ParserElement], savelist: bool = False)¶
Bases: ParserElement
Abstract subclass of ParserElement, for combining and post-processing parsed tokens.
- append(other) ParserElement¶
Add an expression to the list of expressions related to this ParseExpression instance.
- copy() ParserElement¶
Returns a copy of this expression.
Generally only used internally by pyparsing.
- ignore(other) ParserElement¶
Define expression to be ignored (e.g., comments) while doing pattern matching; may be called repeatedly, to define multiple comment or other ignorable patterns.
- ignoreWhitespace(recursive: bool = True) ParserElement¶
Deprecated since version 3.0.0: Use ignore_whitespace instead
- ignore_whitespace(recursive: bool = True) ParserElement¶
Extends ignore_whitespace defined in base class, and also invokes ignore_whitespace on all contained expressions.
- leaveWhitespace(recursive: bool = True) ParserElement¶
Deprecated since version 3.0.0: Use leave_whitespace instead
- leave_whitespace(recursive: bool = True) ParserElement¶
Extends leave_whitespace defined in base class, and also invokes leave_whitespace on all contained expressions.
- validate(validateTrace=None) None¶
Deprecated since version 3.0.0: Do not use to check for left recursion.
Check defined expressions for valid structure, check for infinite recursive definitions.
- exception ParseFatalException(pstr: str, loc: int = 0, msg: str | None = None, elem=None)¶
Bases: ParseBaseException
User-throwable exception thrown when inconsistent parse content is found; stops all parsing immediately
- args: tuple[str, int, str | None]¶
- loc: int¶
- msg: str¶
- parser_element: Any¶
- pstr: str¶
- class ParseResults(toklist=None, name=None, **kwargs)¶
Bases: object
Structured parse results, to provide multiple means of access to the parsed data:
as a list (len(results))
by list index (results[0], results[1], etc.)
by attribute (results.<results_name> - see ParserElement.set_results_name)
Example:
integer = Word(nums)
date_str = (integer.set_results_name("year") + '/'
            + integer.set_results_name("month") + '/'
            + integer.set_results_name("day"))

# equivalent form:
# date_str = (integer("year") + '/'
#             + integer("month") + '/'
#             + integer("day"))

# parse_string returns a ParseResults object
result = date_str.parse_string("1999/12/31")

def test(s, fn=repr):
    print(f"{s} -> {fn(eval(s))}")

test("list(result)")
test("result[0]")
test("result['month']")
test("result.day")
test("'month' in result")
test("'minutes' in result")
test("result.dump()", str)
prints:
list(result) -> ['1999', '/', '12', '/', '31']
result[0] -> '1999'
result['month'] -> '12'
result.day -> '31'
'month' in result -> True
'minutes' in result -> False
result.dump() -> ['1999', '/', '12', '/', '31']
- day: '31'
- month: '12'
- year: '1999'
- class List(contained=None)¶
Bases: list
Simple wrapper class to distinguish parsed list results that should be preserved as actual Python lists, instead of being converted to ParseResults:
import pyparsing as pp
ppc = pp.common

LBRACK, RBRACK, LPAR, RPAR = pp.Suppress.using_each("[]()")

element = pp.Forward()
item = ppc.integer
item_list = pp.DelimitedList(element)
element_list = LBRACK + item_list + RBRACK | LPAR + item_list + RPAR
element <<= item | element_list

# add parse action to convert from ParseResults
# to actual Python collection types
@element_list.add_parse_action
def as_python_list(t):
    return pp.ParseResults.List(t.as_list())

element.run_tests('''
    100
    [2,3,4]
    [[2, 1],3,4]
    [(2, 1),3,4]
    (2,3,4)
    ([2, 3], 4)
    ''',
    post_parse=lambda s, r: (r[0], type(r[0]))
    )
prints:
100
(100, <class 'int'>)

[2,3,4]
([2, 3, 4], <class 'list'>)

[[2, 1],3,4]
([[2, 1], 3, 4], <class 'list'>)

[(2, 1),3,4]
([[2, 1], 3, 4], <class 'list'>)

(2,3,4)
([2, 3, 4], <class 'list'>)

([2, 3], 4)
([[2, 3], 4], <class 'list'>)
(Used internally by Group when aslist=True.)
- __add__(other: ParseResults) ParseResults¶
- __getitem__(i)¶
- __str__() str¶
Return str(self).
- append(item)¶
Add single element to end of ParseResults list of elements.
Example:
>>> numlist = Word(nums)[...]
>>> print(numlist.parse_string("0 123 321"))
['0', '123', '321']
>>> # use a parse action to compute the sum of the parsed integers,
>>> # and add it to the end
>>> def append_sum(tokens):
...     tokens.append(sum(map(int, tokens)))
...
>>> numlist.add_parse_action(append_sum)
[W:(0-9)]...
>>> print(numlist.parse_string("0 123 321"))
['0', '123', '321', 444]
- asDict() dict¶
Returns the named parse results as a nested dictionary.
Example:
>>> integer = pp.Word(pp.nums)
>>> date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
>>> result = date_str.parse_string('1999/12/31')
>>> type(result)
<class 'pyparsing.results.ParseResults'>
>>> result
ParseResults(['1999', '/', '12', '/', '31'], {'year': '1999', 'month': '12', 'day': '31'})
>>> result_dict = result.as_dict()
>>> type(result_dict)
<class 'dict'>
>>> result_dict
{'year': '1999', 'month': '12', 'day': '31'}
>>> # even though a ParseResults supports dict-like access,
>>> # sometimes you just need to have a dict
>>> import json
>>> print(json.dumps(result))
Traceback (most recent call last):
TypeError: Object of type ParseResults is not JSON serializable
>>> print(json.dumps(result.as_dict()))
{"year": "1999", "month": "12", "day": "31"}
- asList(*, flatten: bool = False) list¶
Returns the parse results as a nested list of matching tokens, all converted to strings. If flatten is True, all the nesting levels in the returned list are collapsed.
Example:
>>> patt = Word(alphas)[1, ...]
>>> result = patt.parse_string("sldkj lsdkj sldkj")
>>> # even though the result prints in string-like form,
>>> # it is actually a pyparsing ParseResults
>>> type(result)
<class 'pyparsing.results.ParseResults'>
>>> print(result)
['sldkj', 'lsdkj', 'sldkj']
>>> # Use as_list() to create an actual list
>>> result_list = result.as_list()
>>> type(result_list)
<class 'list'>
>>> print(result_list)
['sldkj', 'lsdkj', 'sldkj']
Changed in version 3.2.0: New flatten argument.
- as_dict() dict¶
Returns the named parse results as a nested dictionary.
Example:
>>> integer = pp.Word(pp.nums)
>>> date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
>>> result = date_str.parse_string('1999/12/31')
>>> type(result)
<class 'pyparsing.results.ParseResults'>
>>> result
ParseResults(['1999', '/', '12', '/', '31'], {'year': '1999', 'month': '12', 'day': '31'})
>>> result_dict = result.as_dict()
>>> type(result_dict)
<class 'dict'>
>>> result_dict
{'year': '1999', 'month': '12', 'day': '31'}
>>> # even though a ParseResults supports dict-like access,
>>> # sometimes you just need to have a dict
>>> import json
>>> print(json.dumps(result))
Traceback (most recent call last):
TypeError: Object of type ParseResults is not JSON serializable
>>> print(json.dumps(result.as_dict()))
{"year": "1999", "month": "12", "day": "31"}
- as_list(*, flatten: bool = False) list¶
Returns the parse results as a nested list of matching tokens, all converted to strings. If flatten is True, all the nesting levels in the returned list are collapsed.
Example:
>>> patt = Word(alphas)[1, ...]
>>> result = patt.parse_string("sldkj lsdkj sldkj")
>>> # even though the result prints in string-like form,
>>> # it is actually a pyparsing ParseResults
>>> type(result)
<class 'pyparsing.results.ParseResults'>
>>> print(result)
['sldkj', 'lsdkj', 'sldkj']
>>> # Use as_list() to create an actual list
>>> result_list = result.as_list()
>>> type(result_list)
<class 'list'>
>>> print(result_list)
['sldkj', 'lsdkj', 'sldkj']
Changed in version 3.2.0: New flatten argument.
- clear()¶
Clear all elements and results names.
- copy() ParseResults¶
Returns a new shallow copy of a ParseResults object. ParseResults items contained within the source are shared with the copy. Use ParseResults.deepcopy() to create a copy with its own separate content values.
- deepcopy() ParseResults¶
Returns a new deep copy of a ParseResults object.
Added in version 3.1.0.
- dump(indent='', full=True, include_list=True, _depth=0) str¶
Diagnostic method for listing out the contents of a ParseResults. Accepts an optional indent argument so that this string can be embedded in a nested display of other data.
Example:
integer = Word(nums)
date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
result = date_str.parse_string('1999/12/31')
print(result.dump())
prints:
['1999', '/', '12', '/', '31']
- day: '31'
- month: '12'
- year: '1999'
- extend(itemseq)¶
Add sequence of elements to end of ParseResults list of elements.
Example:
patt = Word(alphas)[1, ...]

# use a parse action to append the reverse of the matched strings,
# to make a palindrome
def make_palindrome(tokens):
    tokens.extend(reversed([t[::-1] for t in tokens]))
    return ''.join(tokens)

patt.add_parse_action(make_palindrome)
print(patt.parse_string("lskdj sdlkjf lksd"))
prints:
['lskdjsdlkjflksddsklfjkldsjdksl']
- classmethod from_dict(other, name=None) ParseResults¶
Helper classmethod to construct a ParseResults from a dict, preserving the name-value relations as results names. If an optional name argument is given, a nested ParseResults will be returned.
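For illustration, a minimal sketch of from_dict (the dict contents here are invented for the example):
```python
import pyparsing as pp

# build a ParseResults from a plain dict; keys become results names
result = pp.ParseResults.from_dict({"year": "1999", "month": "12", "day": "31"})

print(result["year"])     # prints: 1999
print(result.month)       # prints: 12
print(result.as_dict())   # round-trips back to the original dict
```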
- get(key, default_value=None)¶
Returns named result matching the given key, or if there is no such name, then returns the given default_value, or None if no default_value is specified.
Similar to dict.get().
Example:
>>> integer = Word(nums)
>>> date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
>>> result = date_str.parse_string("1999/12/31")
>>> result.get("year")
'1999'
>>> result.get("hour", "not specified")
'not specified'
>>> result.get("hour")
- getName() str | None¶
Returns the results name for this token expression.
Useful when several different expressions might match at a particular location.
Example:
integer = Word(nums)
ssn_expr = Regex(r"\d\d\d-\d\d-\d\d\d\d")
house_number_expr = Suppress('#') + Word(nums, alphanums)
user_data = (Group(house_number_expr)("house_number")
             | Group(ssn_expr)("ssn")
             | Group(integer)("age"))
user_info = user_data[1, ...]

result = user_info.parse_string("22 111-22-3333 #221B")
for item in result:
    print(item.get_name(), ':', item[0])
prints:
age : 22
ssn : 111-22-3333
house_number : 221B
- get_name() str | None¶
Returns the results name for this token expression.
Useful when several different expressions might match at a particular location.
Example:
integer = Word(nums)
ssn_expr = Regex(r"\d\d\d-\d\d-\d\d\d\d")
house_number_expr = Suppress('#') + Word(nums, alphanums)
user_data = (Group(house_number_expr)("house_number")
             | Group(ssn_expr)("ssn")
             | Group(integer)("age"))
user_info = user_data[1, ...]

result = user_info.parse_string("22 111-22-3333 #221B")
for item in result:
    print(item.get_name(), ':', item[0])
prints:
age : 22
ssn : 111-22-3333
house_number : 221B
- haskeys() bool¶
Since keys() returns an iterator, this method is helpful in bypassing code that looks for the existence of any defined results names.
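A small sketch contrasting results with and without names (the expressions here are invented for the example):
```python
import pyparsing as pp

integer = pp.Word(pp.nums)

# expression with results names defined
dated = (integer("year") + "/" + integer("month")).parse_string("1999/12")
print(dated.haskeys())    # prints: True

# expression without any results names
plain = integer[1, ...].parse_string("1 2 3")
print(plain.haskeys())    # prints: False
```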
- insert(index, ins_string)¶
Inserts new element at location index in the list of parsed tokens.
Similar to list.insert().
Example:
>>> numlist = Word(nums)[...]
>>> print(numlist.parse_string("0 123 321"))
['0', '123', '321']
>>> # use a parse action to insert the parse location
>>> # in the front of the parsed results
>>> def insert_locn(locn, tokens):
...     tokens.insert(0, locn)
...
>>> numlist.add_parse_action(insert_locn)
[W:(0-9)]...
>>> print(numlist.parse_string("0 123 321"))
[0, '0', '123', '321']
- items()¶
- keys()¶
- pop(*args, **kwargs)¶
Removes and returns item at specified index (default=last). Supports both list and dict semantics for pop(). If passed no argument or an integer argument, it will use list semantics and pop tokens from the list of parsed tokens. If passed a non-integer argument (most likely a string), it will use dict semantics and pop the corresponding value from any defined results names. A second default return value argument is supported, just as in dict.pop().
Example:
>>> numlist = Word(nums)[...]
>>> print(numlist.parse_string("0 123 321"))
['0', '123', '321']
>>> def remove_first(tokens):
...     tokens.pop(0)
...
>>> numlist.add_parse_action(remove_first)
[W:(0-9)]...
>>> print(numlist.parse_string("0 123 321"))
['123', '321']
>>> label = Word(alphas)
>>> patt = label("LABEL") + Word(nums)[1, ...]
>>> print(patt.parse_string("AAB 123 321").dump())
['AAB', '123', '321']
- LABEL: 'AAB'
>>> # Use pop() in a parse action to remove named result
>>> # (note that corresponding value is not
>>> # removed from list form of results)
>>> def remove_LABEL(tokens):
...     tokens.pop("LABEL")
...     return tokens
...
>>> patt.add_parse_action(remove_LABEL)
{W:(A-Za-z) {W:(0-9)}...}
>>> print(patt.parse_string("AAB 123 321").dump())
['AAB', '123', '321']
- pprint(*args, **kwargs)¶
Pretty-printer for parsed results as a list, using the pprint module. Accepts additional positional or keyword args as defined for pprint.pprint.
Example:
ident = Word(alphas, alphanums)
num = Word(nums)
func = Forward()
term = ident | num | Group('(' + func + ')')
func <<= ident + Group(Optional(DelimitedList(term)))
result = func.parse_string("fna a,b,(fnb c,d,200),100")
result.pprint(width=40)
prints:
['fna',
 ['a',
  'b',
  ['(', 'fnb', ['c', 'd', '200'], ')'],
  '100']]
- values()¶
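keys(), items(), and values() behave like their dict counterparts over the defined results names; a short sketch (grammar invented for the example):
```python
import pyparsing as pp

integer = pp.Word(pp.nums)
date_str = integer("year") + "/" + integer("month") + "/" + integer("day")
result = date_str.parse_string("1999/12/31")

print(sorted(result.keys()))    # prints: ['day', 'month', 'year']
print(dict(result.items()))     # the named results as key/value pairs
print(sorted(result.values()))  # prints: ['12', '1999', '31']
```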
- exception ParseSyntaxException(pstr: str, loc: int = 0, msg: str | None = None, elem=None)¶
Bases: ParseFatalException
Just like ParseFatalException, but thrown internally when an ErrorStop ('-' operator) indicates that parsing is to stop immediately because an unbacktrackable syntax error has been found.
- args: tuple[str, int, str | None]¶
- loc: int¶
- msg: str¶
- parser_element: Any¶
- pstr: str¶
- class ParserElement(savelist: bool = False)¶
Bases: ABC
Abstract base level parser element class.
- DEFAULT_WHITE_CHARS: str = ' \n\t\r'¶
- class DebugActions(debug_try, debug_match, debug_fail)¶
Bases: NamedTuple
- debug_fail: Callable[[str, int, ParserElement, Exception, bool], None] | None¶
Alias for field number 2
- debug_match: Callable[[str, int, int, ParserElement, ParseResults, bool], None] | None¶
Alias for field number 1
- debug_try: Callable[[str, int, ParserElement, bool], None] | None¶
Alias for field number 0
- class NullCache¶
Bases: dict
A null cache type for initialization of the packrat_cache class variable. If/when enable_packrat() is called, this null cache will be replaced by a proper _CacheType class instance.
- clear() None¶
Remove all items from the dict.
- get(*args) Any¶
Return the value for key if key is in the dictionary, else default.
- not_in_cache: bool = True¶
- set(*args) None¶
- __add__(other) ParserElement¶
Implementation of + operator - returns And. Adding strings to a ParserElement converts them to Literals by default.
Example:
greet = Word(alphas) + "," + Word(alphas) + "!"
hello = "Hello, World!"
print(hello, "->", greet.parse_string(hello))
prints:
Hello, World! -> ['Hello', ',', 'World', '!']
... may be used as a parse expression as a short form of SkipTo:
Literal('start') + ... + Literal('end')
is equivalent to:
Literal('start') + SkipTo('end')("_skipped*") + Literal('end')
Note that the skipped text is returned with ‘_skipped’ as a results name, and to support having multiple skips in the same parser, the value returned is a list of all skipped text.
- __and__(other) ParserElement¶
Implementation of & operator - returns Each
- __call__(name: str | None = None) ParserElement¶
Shortcut for set_results_name, with list_all_matches=False.
If name is given with a trailing '*' character, then list_all_matches will be passed as True.
If name is omitted, same as calling copy.
Example:
# these are equivalent
userdata = (
    Word(alphas).set_results_name("name")
    + Word(nums + "-").set_results_name("socsecno")
)
userdata = Word(alphas)("name") + Word(nums + "-")("socsecno")
- __getitem__(key)¶
Use [] indexing notation as a short form for expression repetition:
expr[n] is equivalent to expr*n
expr[m, n] is equivalent to expr*(m, n)
expr[n, ...] or expr[n,] is equivalent to expr*n + ZeroOrMore(expr) (read as "at least n instances of expr")
expr[..., n] is equivalent to expr*(0, n) (read as "0 to n instances of expr")
expr[...] and expr[0, ...] are equivalent to ZeroOrMore(expr)
expr[1, ...] is equivalent to OneOrMore(expr)
None may be used in place of ....
Note that expr[..., n] and expr[m, n] do not raise an exception if more than n exprs exist in the input stream. If this behavior is desired, then write expr[..., n] + ~expr.
For repetition with a stop_on expression, use slice notation:
expr[...: end_expr] and expr[0, ...: end_expr] are equivalent to ZeroOrMore(expr, stop_on=end_expr)
expr[1, ...: end_expr] is equivalent to OneOrMore(expr, stop_on=end_expr)
Changed in version 3.1.0: Support for slice notation.
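A brief sketch of the indexing forms above:
```python
import pyparsing as pp

word = pp.Word(pp.alphas)

print(word[2].parse_string("ab cd"))        # exactly two occurrences
print(word[1, ...].parse_string("a b c"))   # one or more
print(word[..., 2].parse_string("a b c"))   # up to two; extra input is simply not consumed
```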
- __invert__() ParserElement¶
Implementation of ~ operator - returns NotAny
- __mul__(other) ParserElement¶
Implementation of * operator, allows use of expr * 3 in place of expr + expr + expr. Expressions may also be multiplied by a 2-integer tuple, similar to {min, max} multipliers in regular expressions. Tuples may also include None as in:
expr*(n, None) or expr*(n, ) is equivalent to expr*n + ZeroOrMore(expr) (read as "at least n instances of expr")
expr*(None, n) is equivalent to expr*(0, n) (read as "0 to n instances of expr")
expr*(None, None) is equivalent to ZeroOrMore(expr)
expr*(1, None) is equivalent to OneOrMore(expr)
Note that expr*(None, n) does not raise an exception if more than n exprs exist in the input stream; that is, expr*(None, n) does not enforce a maximum number of expr occurrences. If this behavior is desired, then write expr*(None, n) + ~expr
- __or__(other) ParserElement¶
Implementation of | operator - returns MatchFirst
Changed in version 3.1.0: Support expr | "" as a synonym for Optional(expr).
- __str__() str¶
Return str(self).
- __sub__(other) ParserElement¶
Implementation of - operator, returns And with error stop
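A sketch of the error stop in action (grammar names invented for the example): once the expression before the '-' has matched, a failure in what follows raises a fatal ParseSyntaxException instead of allowing backtracking:
```python
import pyparsing as pp

# with '+', a failure after "port" is an ordinary ParseException that
# alternatives could backtrack past; with '-', it is unbacktrackable
port_setting = pp.Literal("port") - pp.Word(pp.nums)("number")

try:
    port_setting.parse_string("port abc")
except pp.ParseSyntaxException as err:
    print("syntax error:", err)
```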
- __xor__(other) ParserElement¶
Implementation of ^ operator - returns Or
- addCondition(
- *fns: Callable[[], bool] | Callable[[ParseResults], bool] | Callable[[int, ParseResults], bool] | Callable[[str, int, ParseResults], bool],
- call_during_try: bool = False,
- **kwargs: Any,
Deprecated since version 3.0.0: Use add_condition instead
- addParseAction(
- *fns: Callable[[], Any] | Callable[[ParseResults], Any] | Callable[[int, ParseResults], Any] | Callable[[str, int, ParseResults], Any],
- call_during_try: bool = False,
- **kwargs: Any,
Deprecated since version 3.0.0: Use add_parse_action instead
- add_condition(
- *fns: Callable[[], bool] | Callable[[ParseResults], bool] | Callable[[int, ParseResults], bool] | Callable[[str, int, ParseResults], bool],
- call_during_try: bool = False,
- **kwargs: Any,
Add a boolean predicate function to expression’s list of parse actions. See set_parse_action for function call signatures. Unlike set_parse_action, functions passed to add_condition need to return boolean success/fail of the condition.
Optional keyword arguments:
message = define a custom message to be used in the raised exception
fatal = if True, will raise ParseFatalException to stop parsing immediately; otherwise will raise ParseException
call_during_try = boolean to indicate if this method should be called during internal tryParse calls, default=False
Example:
>>> integer = Word(nums).set_parse_action(lambda toks: int(toks[0]))
>>> year_int = integer.copy().add_condition(
...     lambda toks: toks[0] >= 2000,
...     message="Only support years 2000 and later")
>>> date_str = year_int + '/' + integer + '/' + integer
>>> result = date_str.parse_string("1999/12/31")
Traceback (most recent call last):
ParseException: Only support years 2000 and later...
- add_parse_action(
- *fns: Callable[[], Any] | Callable[[ParseResults], Any] | Callable[[int, ParseResults], Any] | Callable[[str, int, ParseResults], Any],
- call_during_try: bool = False,
- **kwargs: Any,
Add one or more parse actions to expression’s list of parse actions. See set_parse_action.
See examples in copy.
- canParseNext(instring: str, loc: int, do_actions: bool = False) bool¶
Deprecated since version 3.0.0: Use can_parse_next instead
- copy() ParserElement¶
Make a copy of this ParserElement. Useful for defining different parse actions for the same parsing pattern, using copies of the original parse element.
Example:
integer = Word(nums).set_parse_action(lambda toks: int(toks[0]))

integerK = integer.copy().add_parse_action(
    lambda toks: toks[0] * 1024) + Suppress("K")
integerM = integer.copy().add_parse_action(
    lambda toks: toks[0] * 1024 * 1024) + Suppress("M")

print(
    (integerK | integerM | integer)[1, ...].parse_string(
        "5K 100 640K 256M")
)
prints:
[5120, 100, 655360, 268435456]
Equivalent form of expr.copy() is just expr():
integerM = integer().add_parse_action(
    lambda toks: toks[0] * 1024 * 1024) + Suppress("M")
- create_diagram(
- output_html: TextIO | Path | str,
- vertical: int = 3,
- show_results_names: bool = False,
- show_groups: bool = False,
- embed: bool = False,
- show_hidden: bool = False,
- **kwargs,
Create a railroad diagram for the parser.
Parameters:
output_html (str or file-like object) - output target for generated diagram HTML
vertical (int) - threshold for formatting multiple alternatives vertically instead of horizontally (default=3)
show_results_names - bool flag whether diagram should show annotations for defined results names
show_groups - bool flag whether groups should be highlighted with an unlabeled surrounding box
show_hidden - bool flag to show diagram elements for internal elements that are usually hidden
embed - bool flag whether generated HTML should omit <HEAD>, <BODY>, and <DOCTYPE> tags to embed the resulting HTML in an enclosing HTML source
head - str containing additional HTML to insert into the <HEAD> section of the generated code; can be used to insert custom CSS styling
body - str containing additional HTML to insert at the beginning of the <BODY> section of the generated code
Additional diagram-formatting keyword arguments can also be included; see railroad.Diagram class.
Changed in version 3.1.0: embed argument added.
- property defaultName: str¶
- static disableMemoization() None¶
Deprecated since version 3.0.0: Use disable_memoization instead
- static disable_memoization() None¶
Disables active Packrat or Left Recursion parsing and their memoization
This method also works if neither Packrat nor Left Recursion is enabled, making it safe to call before activating Packrat or Left Recursion to clear any previous settings.
- static enableLeftRecursion(cache_size_limit: int | None = None, *, force=False) None¶
Deprecated since version 3.0.0: Use enable_left_recursion instead
- static enablePackrat(cache_size_limit: int | None = 128, *, force: bool = False) None¶
Deprecated since version 3.0.0: Use enable_packrat instead
- static enable_left_recursion(cache_size_limit: int | None = None, *, force=False) None¶
Enables “bounded recursion” parsing, which allows for both direct and indirect left-recursion. During parsing, left-recursive Forward elements are repeatedly matched with a fixed recursion depth that is gradually increased until finding the longest match.
Example:
import pyparsing as pp
pp.ParserElement.enable_left_recursion()

E = pp.Forward("E")
num = pp.Word(pp.nums)
# match `num`, or `num '+' num`, or `num '+' num '+' num`, ...
E <<= E + '+' - num | num

print(E.parse_string("1+2+3+4"))
prints:
['1', '+', '2', '+', '3', '+', '4']
Recursion search naturally memoizes matches of Forward elements and may thus skip reevaluation of parse actions during backtracking. This may break programs with parse actions which rely on strict ordering of side-effects.
Parameters:
cache_size_limit - (default=None) - memoize at most this many Forward elements during matching; if None (the default), memoize all Forward elements.
Bounded Recursion parsing works similarly, but not identically, to Packrat parsing; thus the two cannot be used together. Use force=True to disable any previous, conflicting settings.
- static enable_packrat(cache_size_limit: int | None = 128, *, force: bool = False) None¶
Enables “packrat” parsing, which adds memoizing to the parsing logic. Repeated parse attempts at the same string location (which happens often in many complex grammars) can immediately return a cached value, instead of re-executing parsing/validating code. Memoizing is done of both valid results and parsing exceptions.
Parameters:
cache_size_limit - (default=128) - if an integer value is provided, will limit the size of the packrat cache; if None is passed, then the cache size will be unbounded; if 0 is passed, the cache will be effectively disabled.
This speedup may break existing programs that use parse actions that have side-effects. For this reason, packrat parsing is disabled when you first import pyparsing. To activate the packrat feature, your program must call the class method ParserElement.enable_packrat. For best results, call enable_packrat() immediately after importing pyparsing.
Example:
import pyparsing
pyparsing.ParserElement.enable_packrat()
Packrat parsing works similarly, but not identically, to Bounded Recursion parsing; thus the two cannot be used together. Use force=True to disable any previous, conflicting settings.
- ignore(other: ParserElement) ParserElement¶
Define expression to be ignored (e.g., comments) while doing pattern matching; may be called repeatedly, to define multiple comment or other ignorable patterns.
Example:
>>> patt = Word(alphas)[...]
>>> print(patt.parse_string('ablaj /* comment */ lskjd'))
['ablaj']

>>> patt = Word(alphas)[...].ignore(c_style_comment)
>>> print(patt.parse_string('ablaj /* comment */ lskjd'))
['ablaj', 'lskjd']
- ignoreWhitespace(recursive: bool = True) ParserElement¶
Deprecated since version 3.0.0: Use ignore_whitespace instead
- ignore_whitespace(recursive: bool = True) ParserElement¶
Enables the skipping of whitespace before matching the characters in the ParserElement’s defined pattern.
- Parameters:
recursive – If True (the default), also enable whitespace skipping in child elements (if any)
- static inlineLiteralsUsing(cls: type) None¶
Deprecated since version 3.0.0: Use inline_literals_using instead
- static inline_literals_using(cls: type) None¶
Set class to be used for inclusion of string literals into a parser.
Example:
# default literal class used is Literal
>>> integer = Word(nums)
>>> date_str = (
...     integer("year") + '/'
...     + integer("month") + '/'
...     + integer("day")
... )
>>> date_str.parse_string("1999/12/31")
ParseResults(['1999', '/', '12', '/', '31'], {'year': '1999', 'month': '12', 'day': '31'})

# change to Suppress
>>> ParserElement.inline_literals_using(Suppress)
>>> date_str = (
...     integer("year") + '/'
...     + integer("month") + '/'
...     + integer("day")
... )
>>> date_str.parse_string("1999/12/31")
ParseResults(['1999', '12', '31'], {'year': '1999', 'month': '12', 'day': '31'})

# Reset
>>> ParserElement.inline_literals_using(Literal)
- leaveWhitespace(recursive: bool = True) ParserElement¶
Deprecated since version 3.0.0: Use leave_whitespace instead
- leave_whitespace(recursive: bool = True) ParserElement¶
Disables the skipping of whitespace before matching the characters in the ParserElement’s defined pattern. This is normally only used internally by the pyparsing module, but may be needed in some whitespace-sensitive grammars.
- Parameters:
recursive – If True (the default), also disable whitespace skipping in child elements (if any)
- matches(test_string: str, parse_all: bool = True, **kwargs) bool¶
Method for quick testing of a parser against a test string. Good for simple inline microtests of subexpressions while building up a larger parser.
- Parameters:
test_string – string to test against this expression for a match
parse_all – flag to pass to parse_string() when running tests
Example:
>>> expr = Word(nums)
>>> expr.matches("100")
True
- property name: str¶
Returns a user-defined name if available, but otherwise defaults back to the auto-generated name
- packrat_cache: _CacheType = {}¶
- packrat_cache_lock = <unlocked _thread.RLock object owner=0 count=0>¶
- packrat_cache_stats = [0, 0]¶
- parseFile(
- file_or_filename: str | Path | TextIO,
- encoding: str = 'utf-8',
- parse_all: bool = False,
- **kwargs,
Deprecated since version 3.0.0: Use parse_file instead
- parseString(
- instring: str,
- parse_all: bool = False,
- **kwargs,
Deprecated since version 3.0.0: Use parse_string instead
- parseWithTabs() ParserElement¶
Deprecated since version 3.0.0: Use parse_with_tabs instead
- parse_file(
- file_or_filename: str | Path | TextIO,
- encoding: str = 'utf-8',
- parse_all: bool = False,
- **kwargs,
Execute the parse expression on the given file or filename. If a filename is specified (instead of a file object), the entire file is opened, read, and closed before parsing.
- parse_string(
- instring: str,
- parse_all: bool = False,
- **kwargs,
Parse a string with respect to the parser definition. This function is intended as the primary interface to the client code.
- Parameters:
instring – The input string to be parsed.
parse_all – If set, the entire input string must match the grammar.
parseAll – retained for pre-PEP8 compatibility, will be removed in a future release.
- Raises:
ParseException – Raised if parse_all is set and the input string does not match the whole grammar.
- Returns:
the parsed data as a ParseResults object, which may be accessed as a list, a dict, or an object with attributes if the given parser includes results names.
If the input string is required to match the entire grammar, the parse_all flag must be set to True. This is also equivalent to ending the grammar with StringEnd().
To report proper column numbers, parse_string operates on a copy of the input string where all tabs are converted to spaces (8 spaces per tab, as per the default in string.expandtabs). If the input string contains tabs and the grammar uses parse actions that use the loc argument to index into the string being parsed, one can ensure a consistent view of the input string by doing one of the following:
calling parse_with_tabs on your grammar before calling parse_string (see parse_with_tabs),
defining your parse action using the full (s, loc, toks) signature, and referencing the input string using the parse action’s s argument, or
explicitly expanding the tabs in your input string before calling parse_string.
Examples:
By default, partial matches are OK.
>>> res = Word('a').parse_string('aaaaabaaa')
>>> print(res)
['aaaaa']
The parsing behavior varies by the inheriting class of this abstract class. Please refer to the children directly to see more examples.
It raises an exception if the parse_all flag is set and instring does not match the whole grammar.
>>> res = Word('a').parse_string('aaaaabaaa', parse_all=True)
Traceback (most recent call last):
ParseException: Expected end of text, found 'b'  ...
- parse_with_tabs() ParserElement¶
Overrides default behavior to expand <TAB>s to spaces before parsing the input string. Must be called before parse_string when the input grammar contains elements that match <TAB> characters.
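A sketch of a tab-sensitive grammar (the grammar itself is invented for the example, using White to match the literal tab character):
```python
import pyparsing as pp

# fields separated by a literal TAB; White("\t") matches the tab itself
field = pp.Word(pp.alphanums)
tab = pp.White("\t").suppress()
row = field + tab + field

# without parse_with_tabs, parse_string would expand the tab to spaces
# first and White("\t") would fail to match
row.parse_with_tabs()
print(row.parse_string("abc\t123"))   # prints: ['abc', '123']
```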
- recursion_lock = <unlocked _thread.RLock object owner=0 count=0>¶
- recursion_memos: collections.abc.MutableMapping[tuple[int, Forward, bool], tuple[int, ParseResults | Exception]] = {}¶
- static resetCache() None¶
Deprecated since version 3.0.0: Use reset_cache instead
- static reset_cache() None¶
Clears caches used by packrat and left-recursion.
- runTests(
- tests: str | list[str],
- parse_all: bool = True,
- comment: ParserElement | str | None = '#',
- full_dump: bool = True,
- print_results: bool = True,
- failure_tests: bool = False,
- post_parse: Callable[[str, ParseResults], str | None] | None = None,
- file: TextIO | None = None,
- with_line_numbers: bool = False,
- *,
- parseAll: bool = True,
- fullDump: bool = True,
- printResults: bool = True,
- failureTests: bool = False,
- postParse: Callable[[str, ParseResults], str | None] | None = None,
Deprecated since version 3.0.0: Use run_tests instead
- run_tests(
- tests: str | list[str],
- parse_all: bool = True,
- comment: ParserElement | str | None = '#',
- full_dump: bool = True,
- print_results: bool = True,
- failure_tests: bool = False,
- post_parse: Callable[[str, ParseResults], str | None] | None = None,
- file: TextIO | None = None,
- with_line_numbers: bool = False,
- *,
- parseAll: bool = True,
- fullDump: bool = True,
- printResults: bool = True,
- failureTests: bool = False,
- postParse: Callable[[str, ParseResults], str | None] | None = None,
Execute the parse expression on a series of test strings, showing each test, the parsed results or where the parse failed. Quick and easy way to run a parse expression against a list of sample strings.
Parameters:
- tests - a list of separate test strings, or a multiline string of test strings
- parse_all - (default=True) - flag to pass to parse_string when running tests
- comment - (default='#') - expression for indicating embedded comments in the test string; pass None to disable comment filtering
- full_dump - (default=True) - dump results as list followed by results names in nested outline; if False, only dump nested list
- print_results - (default=True) prints test output to stdout
- failure_tests - (default=False) indicates if these tests are expected to fail parsing
- post_parse - (default=None) optional callback for successful parse results; called as fn(test_string, parse_results) and returns a string to be added to the test output
- file - (default=None) optional file-like object to which test output will be written; if None, will default to sys.stdout
- with_line_numbers - (default=False) show test strings with line and column numbers
Returns: a (success, results) tuple, where success indicates that all tests succeeded (or failed if failure_tests is True), and the results contain a list of lines of each test's output.

Passing example:
    number_expr = pyparsing_common.number.copy()

    result = number_expr.run_tests('''
        # unsigned integer
        100
        # negative integer
        -100
        # float with scientific notation
        6.02e23
        # integer with scientific notation
        1e-12
        # negative decimal number without leading digit
        -.100
        ''')
    print("Success" if result[0] else "Failed!")
prints:
    # unsigned integer
    100
    [100]

    # negative integer
    -100
    [-100]

    # float with scientific notation
    6.02e23
    [6.02e+23]

    # integer with scientific notation
    1e-12
    [1e-12]

    # negative decimal number without leading digit
    -.100
    [-0.1]

    Success
Failure-test example:
    result = number_expr.run_tests('''
        # stray character
        100Z
        # too many '.'
        3.14.159
        ''', failure_tests=True)
    print("Success" if result[0] else "Failed!")
prints:
    # stray character
    100Z
       ^
    ParseException: Expected end of text, found 'Z'  ...

    # too many '.'
    3.14.159
        ^
    ParseException: Expected end of text, found '.'  ...
    FAIL: Expected end of text, found '.'  ...

    Success

Each test string must be on a single line. If you want to test a string that spans multiple lines, create a test like this:
    expr = Word(alphanums)[1, ...]
    expr.run_tests(r"this is a test\n of strings that spans \n 3 lines")

(Note that this is a raw string literal; you must include the leading 'r'.)
- scanString(
- instring: str,
- max_matches: int = _MAX_INT,
- overlap: bool = False,
- always_skip_whitespace=True,
- *,
- debug: bool = False,
- **kwargs,
Deprecated since version 3.0.0: Use scan_string instead.
- scan_string(
- instring: str,
- max_matches: int = _MAX_INT,
- overlap: bool = False,
- always_skip_whitespace=True,
- *,
- debug: bool = False,
- **kwargs,
Scan the input string for expression matches. Each match will return the matching tokens, start location, and end location. May be called with optional max_matches argument, to clip scanning after 'n' matches are found. If overlap is specified, then overlapping matches will be reported.

Note that the start and end locations are reported relative to the string being parsed. See parse_string for more information on parsing strings with embedded tabs.

Example:
    source = "sldjf123lsdjjkf345sldkjf879lkjsfd987"
    print(source)
    for tokens, start, end in Word(alphas).scan_string(source):
        print(' '*start + '^'*(end-start))
        print(' '*start + tokens[0])
prints:
    sldjf123lsdjjkf345sldkjf879lkjsfd987
    ^^^^^
    sldjf
            ^^^^^^^
            lsdjjkf
                      ^^^^^^
                      sldkjf
                               ^^^^^^
                               lkjsfd
- searchString(
- instring: str,
- max_matches: int = _MAX_INT,
- *,
- debug: bool = False,
- **kwargs,
Deprecated since version 3.0.0: Use search_string instead.
- search_string(
- instring: str,
- max_matches: int = _MAX_INT,
- *,
- debug: bool = False,
- **kwargs,
Another extension to scan_string, simplifying the access to the tokens found to match the given parse expression. May be called with optional max_matches argument, to clip searching after 'n' matches are found.

Example:

    quote = '''More than Iron, more than Lead, more than Gold I need Electricity'''

    # a capitalized word starts with an uppercase letter,
    # followed by zero or more lowercase letters
    cap_word = Word(alphas.upper(), alphas.lower())

    print(cap_word.search_string(quote))

    # the sum() builtin can be used to merge results
    # into a single ParseResults object
    print(sum(cap_word.search_string(quote)))
prints:
    [['More'], ['Iron'], ['Lead'], ['Gold'], ['I'], ['Electricity']]
    ['More', 'Iron', 'Lead', 'Gold', 'I', 'Electricity']
- setBreak(break_flag: bool = True) ParserElement¶
Deprecated since version 3.0.0: Use set_break instead.
- setDebug(flag: bool = True, recurse: bool = False) ParserElement¶
Deprecated since version 3.0.0: Use set_debug instead.
- setDebugActions(
- start_action: Callable[[str, int, ParserElement, bool], None],
- success_action: Callable[[str, int, int, ParserElement, ParseResults, bool], None],
- exception_action: Callable[[str, int, ParserElement, Exception, bool], None],
Deprecated since version 3.0.0: Use set_debug_actions instead.
- static setDefaultWhitespaceChars(chars: str) None¶
Deprecated since version 3.0.0: Use set_default_whitespace_chars instead.
- setFailAction(
- fn: Callable[[str, int, ParserElement, Exception], None],
Deprecated since version 3.0.0: Use set_fail_action instead.
- setName(name: str | None) ParserElement¶
Deprecated since version 3.0.0: Use set_name instead.
- setParseAction(
- *fns: Callable[[], Any] | Callable[[ParseResults], Any] | Callable[[int, ParseResults], Any] | Callable[[str, int, ParseResults], Any],
- call_during_try: bool = False,
- **kwargs: Any,
Deprecated since version 3.0.0: Use set_parse_action instead.
- setResultsName(
- name: str,
- list_all_matches: bool = False,
- **kwargs,
Deprecated since version 3.0.0: Use set_results_name instead.
- setWhitespaceChars(
- chars: set[str] | str,
- copy_defaults: bool = False,
Deprecated since version 3.0.0: Use set_whitespace_chars instead.
- set_break(break_flag: bool = True) ParserElement¶
Method to invoke the Python pdb debugger when this element is about to be parsed. Set break_flag to True to enable, False to disable.
- set_debug(flag: bool = True, recurse: bool = False) ParserElement¶
Enable display of debugging messages while doing pattern matching. Set flag to True to enable, False to disable. Set recurse to True to set the debug flag on this expression and all sub-expressions.

Example:

    wd = Word(alphas).set_name("alphaword")
    integer = Word(nums).set_name("numword")
    term = wd | integer

    # turn on debugging for wd
    wd.set_debug()

    term[1, ...].parse_string("abc 123 xyz 890")
prints:
    Match alphaword at loc 0(1,1)
      abc 123 xyz 890
      ^
    Matched alphaword -> ['abc']
    Match alphaword at loc 4(1,5)
      abc 123 xyz 890
          ^
    Match alphaword failed, ParseException raised: Expected alphaword, ...
    Match alphaword at loc 8(1,9)
      abc 123 xyz 890
              ^
    Matched alphaword -> ['xyz']
    Match alphaword at loc 12(1,13)
      abc 123 xyz 890
                  ^
    Match alphaword failed, ParseException raised: Expected alphaword, ...
      abc 123 xyz 890
                     ^
    Match alphaword failed, ParseException raised: Expected alphaword, found end of text  ...

The output shown is that produced by the default debug actions - custom debug actions can be specified using set_debug_actions(). Prior to attempting to match the wd expression, the debugging message "Match <exprname> at loc <n>(<line>,<col>)" is shown. Then if the parse succeeds, a "Matched" message is shown, or an "Exception raised" message is shown. Also note the use of set_name() to assign a human-readable name to the expression, which makes debugging and exception messages easier to understand - for instance, the default name created for the Word expression without calling set_name() is "W:(A-Za-z)".

Changed in version 3.1.0: recurse argument added.
- set_debug_actions(
- start_action: Callable[[str, int, ParserElement, bool], None],
- success_action: Callable[[str, int, int, ParserElement, ParseResults, bool], None],
- exception_action: Callable[[str, int, ParserElement, Exception, bool], None],
Customize display of debugging messages while doing pattern matching:
- Parameters:
start_action –
method to be called when an expression is about to be parsed; should have the signature:
fn(input_string: str, location: int, expression: ParserElement, cache_hit: bool)
success_action –
method to be called when an expression has successfully parsed; should have the signature:
fn(input_string: str, start_location: int, end_location: int, expression: ParserElement, parsed_tokens: ParseResults, cache_hit: bool)
exception_action –
method to be called when expression fails to parse; should have the signature:
fn(input_string: str, location: int, expression: ParserElement, exception: Exception, cache_hit: bool)
- static set_default_whitespace_chars(chars: str) None¶
Overrides the default whitespace chars.

Example:

    # default whitespace chars are space, <TAB> and newline
    >>> Word(alphas)[1, ...].parse_string("abc def\nghi jkl")
    ParseResults(['abc', 'def', 'ghi', 'jkl'], {})

    # change to just treat newline as significant
    >>> ParserElement.set_default_whitespace_chars(" \t")
    >>> Word(alphas)[1, ...].parse_string("abc def\nghi jkl")
    ParseResults(['abc', 'def'], {})

    # Reset to default
    >>> ParserElement.set_default_whitespace_chars(" \n\t\r")
- set_fail_action(
- fn: Callable[[str, int, ParserElement, Exception], None],
Define action to perform if parsing fails at this expression. The fail action fn is a callable function that takes the arguments fn(s, loc, expr, err) where:

- s = string being parsed
- loc = location where expression match was attempted and failed
- expr = the parse expression that failed
- err = the exception thrown

The function returns no value. It may throw ParseFatalException if it is desired to stop parsing immediately.
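A minimal sketch of a fail action that records failed match attempts instead of printing them (the failures and record_failure names are illustrative):

```python
from pyparsing import ParseException, Word, alphas, nums

failures = []

def record_failure(s, loc, expr, err):
    # called as fn(s, loc, expr, err) whenever `integer` fails to match
    failures.append((loc, str(err)))

integer = Word(nums).set_fail_action(record_failure)

try:
    # the integer is tried after the word, fails on "def"
    (Word(alphas) + integer).parse_string("abc def")
except ParseException:
    pass

print(failures)
```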
- set_name(name: str | None) ParserElement¶
Define name for this expression, makes debugging and exception messages clearer. If __diag__.enable_debug_on_named_expressions is set to True, setting a name will also enable debug for this expression.

If name is None, clears any custom name for this expression, and clears the debug flag if it was enabled via __diag__.enable_debug_on_named_expressions.
Example:
    >>> integer = Word(nums)
    >>> integer.parse_string("ABC")
    Traceback (most recent call last):
    ParseException: Expected W:(0-9) (at char 0), (line:1, col:1)
    >>> integer.set_name("integer")
    integer
    >>> integer.parse_string("ABC")
    Traceback (most recent call last):
    ParseException: Expected integer (at char 0), (line:1, col:1)
Changed in version 3.1.0: Accept None as the name argument.
- set_parse_action(
- *fns: Callable[[], Any] | Callable[[ParseResults], Any] | Callable[[int, ParseResults], Any] | Callable[[str, int, ParseResults], Any],
- call_during_try: bool = False,
- **kwargs: Any,
Define one or more actions to perform when successfully matching parse element definition.
Parse actions can be called to perform data conversions, do extra validation, update external data structures, or enhance or replace the parsed tokens. Each parse action fn is a callable method with 0-3 arguments, called as fn(s, loc, toks), fn(loc, toks), fn(toks), or just fn(), where:

- s = the original string being parsed (see note below)
- loc = the location of the matching substring
- toks = a list of the matched tokens, packaged as a ParseResults object
The parsed tokens are passed to the parse action as ParseResults. They can be modified in place using list-style append, extend, and pop operations to update the parsed list elements; and with dictionary-style item set and del operations to add, update, or remove any named results. If the tokens are modified in place, it is not necessary to return them with a return statement.
Parse actions can also completely replace the given tokens, with another ParseResults object, or with some entirely different object (common for parse actions that perform data conversions). A convenient way to build a new parse result is to define the values using a dict, and then create the return value using ParseResults.from_dict.

If None is passed as the fn parse action, all previously added parse actions for this expression are cleared.

Optional keyword arguments:
- Parameters:
call_during_try – (default=False) indicate if parse action should be run during lookaheads and alternate testing. For parse actions that have side effects, it is important to only call the parse action once it is determined that it is being called as part of a successful parse. For parse actions that perform additional validation, call_during_try should be passed as True, so that the validation code is included in the preliminary "try" parses.
Note
The default parsing behavior is to expand tabs in the input string before starting the parsing process. See parse_string() for more information on parsing strings containing <TAB>s, and suggested methods to maintain a consistent view of the parsed string, the parse location, and line and column positions within the parsed string.

Example: Parse dates in the form YYYY/MM/DD¶

Setup code:
    def convert_to_int(toks):
        '''a parse action to convert toks from str to int at parse time'''
        return int(toks[0])

    def is_valid_date(instring, loc, toks):
        '''a parse action to verify that the date is a valid date'''
        from datetime import date
        year, month, day = toks[::2]
        try:
            date(year, month, day)
        except ValueError:
            raise ParseException(instring, loc, "invalid date given")

    integer = Word(nums)
    date_str = integer + '/' + integer + '/' + integer

    # add parse actions
    integer.set_parse_action(convert_to_int)
    date_str.set_parse_action(is_valid_date)
Successful parse - note that integer fields are converted to ints:
print(date_str.parse_string("1999/12/31"))
prints:
[1999, '/', 12, '/', 31]
Failure - invalid date:
date_str.parse_string("1999/13/31")
prints:
    Traceback (most recent call last):
    ParseException: invalid date given, found '1999'  ...
- set_results_name(
- name: str,
- list_all_matches: bool = False,
- **kwargs,
Define name for referencing matching tokens as a nested attribute of the returned parse results.
Normally, results names are assigned as you would assign keys in a dict: any existing value is overwritten by later values. If it is necessary to keep all values captured for a particular results name, call set_results_name with list_all_matches=True.

NOTE: set_results_name returns a copy of the original ParserElement object; this is so that the client can define a basic element, such as an integer, and reference it in multiple places with different names.

You can also set results names using the abbreviated syntax, expr("name") in place of expr.set_results_name("name") - see __call__(). If list_all_matches is required, use expr("name*").

Example:
    integer = Word(nums)
    date_str = (integer.set_results_name("year") + '/'
                + integer.set_results_name("month") + '/'
                + integer.set_results_name("day"))

    # equivalent form:
    date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
- set_whitespace_chars(
- chars: set[str] | str,
- copy_defaults: bool = False,
Overrides the default whitespace chars for this expression.
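A short sketch of narrowing one element's whitespace set so that a newline is no longer skipped, and therefore ends the repetition (the names word and result are illustrative):

```python
from pyparsing import OneOrMore, Word, alphas

# by default, Word(alphas) would skip spaces, tabs, and newlines;
# restrict it to skipping spaces only
word = Word(alphas)
word.set_whitespace_chars(" ")

# the newline before "ghi" now stops the repetition
result = OneOrMore(word).parse_string("abc def\nghi", parse_all=False)
print(result)  # -> ['abc', 'def']
```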
- split(
- instring: str,
- maxsplit: int = _MAX_INT,
- include_separators: bool = False,
- **kwargs,
Generator method to split a string using the given expression as a separator. May be called with optional maxsplit argument, to limit the number of splits; and the optional include_separators argument (default=False), if the separating matching text should be included in the split results.

Example:
    punc = one_of(list(".,;:/-!?"))
    print(list(punc.split("This, this?, this sentence, is badly punctuated!")))
prints:
['This', ' this', '', ' this sentence', ' is badly punctuated', '']
- suppress() ParserElement¶
Suppresses the output of this
ParserElement; useful to keep punctuation from cluttering up returned output.
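A quick illustration, suppressing the comma delimiters so only the words appear in the results:

```python
from pyparsing import Literal, Word, alphas

# the comma must be present in the input, but is dropped from the output
comma = Literal(",").suppress()
wd = Word(alphas)
greeting = wd + comma + wd

print(greeting.parse_string("Hello, World"))  # -> ['Hello', 'World']
```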
- suppress_warning(
- warning_type: Diagnostics,
Suppress warnings emitted for a particular diagnostic on this expression.
Example:
    >>> label = pp.Word(pp.alphas)

    # Normally using an empty Forward in a grammar
    # would print a warning, but we can suppress that
    >>> base = pp.Forward().suppress_warning(
    ...     pp.Diagnostics.warn_on_parse_using_empty_Forward)
    >>> grammar = base | label
    >>> print(grammar.parse_string("x"))
    ['x']
- transformString(instring: str, *, debug: bool = False) str¶
Deprecated since version 3.0.0: Use transform_string instead.
- transform_string(instring: str, *, debug: bool = False) str¶
Extension to scan_string, to modify matching text with modified tokens that may be returned from a parse action. To use transform_string, define a grammar and attach a parse action to it that modifies the returned token list. Invoking transform_string() on a target string will then scan for matches, and replace the matched text patterns according to the logic in the parse action. transform_string() returns the resulting transformed string.

Example:
    quote = '''now is the winter of our discontent,
    made glorious summer by this sun of york.'''
    wd = Word(alphas)
    wd.set_parse_action(lambda toks: toks[0].title())

    print(wd.transform_string(quote))
prints:
    Now Is The Winter Of Our Discontent,
    Made Glorious Summer By This Sun Of York.
- tryParse(
- instring: str,
- loc: int,
- *,
- raise_fatal: bool = False,
- do_actions: bool = False,
Deprecated since version 3.0.0: Use try_parse instead.
- classmethod using_each(seq, **class_kwargs)¶
Yields a sequence of class(obj, **class_kwargs) for obj in seq.

Example:
LPAR, RPAR, LBRACE, RBRACE, SEMI = Suppress.using_each("(){};")
Added in version 3.1.0.
- validate(validateTrace=None) None¶
Deprecated since version 3.0.0: Do not use to check for left recursion.
Check defined expressions for valid structure, check for infinite recursive definitions.
- verbose_stacktrace: bool = False¶
- visit_all()¶
General-purpose method to yield all expressions and sub-expressions in a grammar. Typically just for internal use.
- class PrecededBy(expr: ParserElement | str, retreat: int = 0)¶
Bases:
ParseElementEnhance

Lookbehind matching of the given parse expression. PrecededBy does not advance the parsing position within the input string, it only verifies that the specified parse expression matches prior to the current position. PrecededBy always returns a null token list, but if a results name is defined on the given expression, it is returned.

Parameters:

- expr - expression that must match prior to the current parse location
- retreat - (default=None) - (int) maximum number of characters to lookbehind prior to the current parse location
If the lookbehind expression is a string, Literal, Keyword, or a Word or CharsNotIn with a specified exact or maximum length, then the retreat parameter is not required. Otherwise, retreat must be specified to give a maximum number of characters to look back from the current parse position for a lookbehind match.

Example:

    # VB-style variable names with type prefixes
    int_var = PrecededBy("#") + pyparsing_common.identifier
    str_var = PrecededBy("$") + pyparsing_common.identifier
- exception PyparsingDeprecationWarning¶
Bases:
PyparsingWarning, DeprecationWarning

Base warning class for all pyparsing deprecation warnings
- exception PyparsingDiagnosticWarning¶
Bases:
PyparsingWarning

Base warning class for all pyparsing diagnostic warnings
- exception PyparsingWarning¶
Bases:
UserWarning

Base warning class for all pyparsing warnings
- class QuotedString(
- quote_char: str = '',
- esc_char: str | None = None,
- esc_quote: str | None = None,
- multiline: bool = False,
- unquote_results: bool = True,
- end_quote_char: str | None = None,
- convert_whitespace_escapes: bool = True,
- **kwargs,
Bases:
Token

Token for matching strings that are delimited by quoting characters.
Defined with the following parameters:
- quote_char - string of one or more characters defining the quote delimiting string
- esc_char - character to escape quotes, typically backslash (default=None)
- esc_quote - special quote sequence to escape an embedded quote string (such as SQL's "" to escape an embedded ") (default=None)
- multiline - boolean indicating whether quotes can span multiple lines (default=False)
- unquote_results - boolean indicating whether the matched text should be unquoted (default=True)
- end_quote_char - string of one or more characters defining the end of the quote delimited string (default=None => same as quote_char)
- convert_whitespace_escapes - convert escaped whitespace ('\t', '\n', etc.) to actual whitespace (default=True)
Caution

convert_whitespace_escapes has no effect if unquote_results is False.

Example:

    >>> qs = QuotedString('"')
    >>> print(qs.search_string('lsjdf "This is the quote" sldjf'))
    [['This is the quote']]

    >>> complex_qs = QuotedString('{{', end_quote_char='}}')
    >>> print(complex_qs.search_string(
    ...     'lsjdf {{This is the "quote"}} sldjf'))
    [['This is the "quote"']]

    >>> sql_qs = QuotedString('"', esc_quote='""')
    >>> print(sql_qs.search_string(
    ...     'lsjdf "This is the quote with ""embedded"" quotes" sldjf'))
    [['This is the quote with "embedded" quotes']]
- ws_map = {'\\f': '\x0c', '\\n': '\n', '\\r': '\r', '\\t': '\t'}¶
- exception RecursiveGrammarException(parseElementList)¶
Bases:
Exception

Deprecated since version 3.0.0: Only used by the deprecated ParserElement.validate().

Exception thrown by ParserElement.validate if the grammar could be left-recursive; parser may need to enable left recursion using ParserElement.enable_left_recursion.

- __str__() str¶
Return str(self).
- class Regex(
- pattern: Any,
- flags: RegexFlag | int = 0,
- as_group_list: bool = False,
- as_match: bool = False,
- **kwargs,
Bases:
Token

Token for matching strings that match a given regular expression. Defined with string specifying the regular expression in a form recognized by the stdlib Python re module. If the given regex contains named groups (defined using (?P<name>...)), these will be preserved as named ParseResults.

If instead of the Python stdlib re module you wish to use a different RE module (such as the regex module), you can do so by building your Regex object with a compiled RE that was compiled using regex.

The parameters pattern and flags are passed to the re.compile() function as-is. See the Python re module documentation for an explanation of the acceptable patterns and flags.

Example:

    realnum = Regex(r"[+-]?\d+\.\d*")

    # ref: https://stackoverflow.com/questions/267399/how-do-you-match-only-valid-roman-numerals-with-a-regular-expression
    roman = Regex(r"M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})")

    # named fields in a regex will be returned as named results
    date = Regex(r'(?P<year>\d{4})-(?P<month>\d\d?)-(?P<day>\d\d?)')

    # the Regex class will accept regular expressions compiled using the
    # re module
    import re
    parser = pp.Regex(re.compile(r'[0-9]'))
- property re: Pattern¶
Property returning the compiled regular expression for this Regex.
Generally only used internally by pyparsing.
- property re_match: Callable[[str, int], Any]¶
- sub(repl: str) ParserElement¶
Return
Regexwith an attached parse action to transform the parsed result as if called using re.sub(expr, repl, string).Example:
make_html = Regex(r"(\w+):(.*?):").sub(r"<\1>\2</\1>") print(make_html.transform_string("h1:main title:"))
<h1>main title</h1>
- class SkipTo(
- other: ParserElement | str,
- include: bool = False,
- ignore: ParserElement | str | None = None,
- fail_on: ParserElement | str | None = None,
- **kwargs,
Bases:
ParseElementEnhance

Token for skipping over all undefined text until the matched expression is found.
- Parameters:
expr – target expression marking the end of the data to be skipped
include – if True, the target expression is also parsed (the skipped text and target expression are returned as a 2-element list) (default=False).

ignore – (default=None) used to define grammars (typically quoted strings and comments) that might contain false matches to the target expression

fail_on – (default=None) define expressions that are not allowed to be included in the skipped text; if found before the target expression is found, the SkipTo is not a match
Example:
    report = '''
        Outstanding Issues Report - 1 Jan 2000

           # | Severity | Description                               | Days Open
        -----+----------+-------------------------------------------+-----------
         101 | Critical | Intermittent system crash                 |         6
          94 | Cosmetic | Spelling error on Login ('log|n')         |        14
          79 | Minor    | System slow when running too many reports |        47
        '''

    integer = Word(nums)
    SEP = Suppress('|')

    # use SkipTo to simply match everything up until the next SEP
    # - ignore quoted strings, so that a '|' character inside a quoted string does not match
    # - parse action will call token.strip() for each matched token, i.e., the description body
    string_data = SkipTo(SEP, ignore=quoted_string)
    string_data.set_parse_action(token_map(str.strip))
    ticket_expr = (integer("issue_num") + SEP
                   + string_data("sev") + SEP
                   + string_data("desc") + SEP
                   + integer("days_open"))

    for tkt in ticket_expr.search_string(report):
        print(tkt.dump())
prints:
    ['101', 'Critical', 'Intermittent system crash', '6']
    - days_open: '6'
    - desc: 'Intermittent system crash'
    - issue_num: '101'
    - sev: 'Critical'
    ['94', 'Cosmetic', "Spelling error on Login ('log|n')", '14']
    - days_open: '14'
    - desc: "Spelling error on Login ('log|n')"
    - issue_num: '94'
    - sev: 'Cosmetic'
    ['79', 'Minor', 'System slow when running too many reports', '47']
    - days_open: '47'
    - desc: 'System slow when running too many reports'
    - issue_num: '79'
    - sev: 'Minor'

- ignore(expr)¶

Define expression to be ignored (e.g., comments) while doing pattern matching; may be called repeatedly, to define multiple comment or other ignorable patterns.
- class StringEnd¶
Bases:
PositionToken

Matches if current position is at the end of the parse string
- class StringStart¶
Bases:
PositionToken

Matches if current position is at the beginning of the parse string
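A short sketch using both position tokens to require that an expression span the entire input (the whole_word name is illustrative):

```python
from pyparsing import ParseException, StringEnd, StringStart, Word, alphas

# anchor the match to the very start and end of the input string
whole_word = StringStart() + Word(alphas) + StringEnd()

print(whole_word.parse_string("hello"))  # -> ['hello']

try:
    # fails: "world" remains after the first word
    whole_word.parse_string("hello world")
except ParseException as pe:
    print("no match:", pe)
```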
- class Suppress(expr: ParserElement | str, savelist: bool = False)¶
Bases:
TokenConverter

Converter for ignoring the results of a parsed expression.
Example:
    >>> source = "a, b, c,d"
    >>> wd = Word(alphas)
    >>> wd_list1 = wd + (',' + wd)[...]
    >>> print(wd_list1.parse_string(source))
    ['a', ',', 'b', ',', 'c', ',', 'd']

    # often, delimiters that are useful during parsing are just in the
    # way afterward - use Suppress to keep them out of the parsed output
    >>> wd_list2 = wd + (Suppress(',') + wd)[...]
    >>> print(wd_list2.parse_string(source))
    ['a', 'b', 'c', 'd']

    # Skipped text (using '...') can be suppressed as well
    >>> source = "lead in START relevant text END trailing text"
    >>> start_marker = Keyword("START")
    >>> end_marker = Keyword("END")
    >>> find_body = Suppress(...) + start_marker + ... + end_marker
    >>> print(find_body.parse_string(source))
    ['START', 'relevant text ', 'END']
(See also DelimitedList.)

- __add__(other) ParserElement¶

Implementation of + operator - returns And. Adding strings to a ParserElement converts them to Literals by default.

Example:

    greet = Word(alphas) + "," + Word(alphas) + "!"
    hello = "Hello, World!"
    print(hello, "->", greet.parse_string(hello))
prints:
Hello, World! -> ['Hello', ',', 'World', '!']
... may be used as a parse expression as a short form of SkipTo:

    Literal('start') + ... + Literal('end')

is equivalent to:

    Literal('start') + SkipTo('end')("_skipped*") + Literal('end')

Note that the skipped text is returned with '_skipped' as a results name, and to support having multiple skips in the same parser, the value returned is a list of all skipped text.
- __sub__(other) ParserElement¶
Implementation of - operator, returns And with error stop
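A minimal sketch of the error-stop behavior (the port_setting name is illustrative): once the expression to the left of - matches, a failure to the right raises ParseSyntaxException rather than a plain ParseException, so alternatives stop backtracking:

```python
from pyparsing import ParseSyntaxException, Literal, Word, nums

# with '+', a failure after "port" would simply fail this alternative;
# with '-', once "port" matches, the number is mandatory
port_setting = Literal("port") - Word(nums)

try:
    port_setting.parse_string("port abc")
except ParseSyntaxException as pe:
    print("syntax error:", pe)
```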
- suppress() ParserElement¶
Suppresses the output of this
ParserElement; useful to keep punctuation from cluttering up returned output.
- class Tag(tag_name: str, value: Any = True)¶
Bases:
Token

A meta-element for inserting a named result into the parsed tokens that may be checked later in a parse action or while processing the parsed results. Accepts an optional tag value, defaulting to True.
Example:
    >>> end_punc = "." | ("!" + Tag("enthusiastic"))
    >>> greeting = "Hello," + Word(alphas) + end_punc

    >>> result = greeting.parse_string("Hello, World.")
    >>> print(result.dump())
    ['Hello,', 'World', '.']

    >>> result = greeting.parse_string("Hello, World!")
    >>> print(result.dump())
    ['Hello,', 'World', '!']
    - enthusiastic: True

Added in version 3.1.0.
- class Token¶
Bases:
ParserElement

Abstract ParserElement subclass, for defining atomic matching patterns.
- class TokenConverter(expr: ParserElement | str, savelist=False)¶
Bases:
ParseElementEnhance

Abstract subclass of ParseElementEnhance, for converting parsed results.
- class White(ws: str = ' \t\r\n', min: int = 1, max: int = 0, exact: int = 0)¶
Bases:
Token

Special matching class for matching whitespace. Normally, whitespace is ignored by pyparsing grammars. This class is included when some whitespace structures are significant. Define with a string containing the whitespace characters to be matched; default is " \t\r\n". Also takes optional min, max, and exact arguments, as defined for the Word class.

- whiteStrs = {'\t': '<TAB>', '\n': '<LF>', '\x0c': '<FF>', '\r': '<CR>', ' ': '<SP>', '\xa0': '<NBSP>', '\u1680': '<OGHAM_SPACE_MARK>', '\u180e': '<MONGOLIAN_VOWEL_SEPARATOR>', '\u2000': '<EN_QUAD>', '\u2001': '<EM_QUAD>', '\u2002': '<EN_SPACE>', '\u2003': '<EM_SPACE>', '\u2004': '<THREE-PER-EM_SPACE>', '\u2005': '<FOUR-PER-EM_SPACE>', '\u2006': '<SIX-PER-EM_SPACE>', '\u2007': '<FIGURE_SPACE>', '\u2008': '<PUNCTUATION_SPACE>', '\u2009': '<THIN_SPACE>', '\u200a': '<HAIR_SPACE>', '\u200b': '<ZERO_WIDTH_SPACE>', '\u202f': '<NNBSP>', '\u205f': '<MMSP>', '\u3000': '<IDEOGRAPHIC_SPACE>'}¶
- class Word(
- init_chars: str = '',
- body_chars: str | None = None,
- min: int = 1,
- max: int = 0,
- exact: int = 0,
- as_keyword: bool = False,
- exclude_chars: str | None = None,
- **kwargs,
Bases:
Token

Token for matching words composed of allowed character sets.
Parameters:
- init_chars - string of all characters that should be used to match as a word; "ABC" will match "AAA", "ABAB", "CBAC", etc.; if body_chars is also specified, then this is the string of initial characters
- body_chars - string of characters that can be used for matching after a matched initial character as given in init_chars; if omitted, same as the initial characters (default=None)
- min - minimum number of characters to match (default=1)
- max - maximum number of characters to match (default=0)
- exact - exact number of characters to match (default=0)
- as_keyword - match as a keyword (default=False)
- exclude_chars - characters that might be found in the input body_chars string but which should not be accepted for matching; useful to define a word of all printables except for one or two characters, for instance (default=None)
srange is useful for defining custom character set strings for defining Word expressions, using range notation from regular expression character sets.

A common mistake is to use Word to match a specific literal string, as in Word("Address"). Remember that Word uses the string argument to define sets of matchable characters. This expression would match "Add", "AAA", "dAred", or any other word made up of the characters 'A', 'd', 'r', 'e', and 's'. To match an exact literal string, use Literal or Keyword.

pyparsing includes helper strings for building Words:

- alphas
- nums
- alphanums
- hexnums
- alphas8bit (alphabetic characters in ASCII range 128-255 - accented, tilded, umlauted, etc.)
- punc8bit (non-alphabetic characters in ASCII range 128-255 - currency, symbols, superscripts, diacriticals, etc.)
- printables (any non-whitespace character)

alphas, nums, and printables are also defined in several Unicode sets - see pyparsing_unicode.

Example:

    # a word composed of digits
    integer = Word(nums)

    # Two equivalent alternate forms:
    Word("0123456789")
    Word(srange("[0-9]"))

    # a word with a leading capital, and zero or more lowercase
    capitalized_word = Word(alphas.upper(), alphas.lower())

    # hostnames are alphanumeric, with leading alpha, and '-'
    hostname = Word(alphas, alphanums + '-')

    # roman numeral
    # (not a strict parser, accepts invalid mix of characters)
    roman = Word("IVXLCDM")

    # any string of non-whitespace characters, except for ','
    csv_value = Word(printables, exclude_chars=",")
- Raises:
ValueError – If
min and max are both specified and the test min <= max fails.
Changed in version 3.1.0: Raises
ValueError if min > max.
- property initChars: set[str]¶
Deprecated since version 3.3.0: use init_chars instead.
Property returning the initial chars to be used when matching this Word expression. If no body chars were specified, the initial characters will also be the body characters.
- class WordEnd(word_chars: str = printables, **kwargs)¶
Bases:
PositionToken

Matches if the current position is at the end of a Word, and is not followed by any character in a given set of word_chars (default=printables). To emulate the \b behavior of regular expressions, use WordEnd(alphanums). WordEnd will also match at the end of the string being parsed, or at the end of a line.
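For example, WordEnd can reject a match that runs directly into more word characters (a minimal sketch; the sample text and names are illustrative):

```python
import pyparsing as pp

# match "cat" only as a complete word: the match must not be
# immediately followed by another alphabetic character
whole_cat = pp.Literal("cat") + pp.WordEnd(pp.alphas)

hits = [t[0] for t in whole_cat.search_string("cat cats catalog")]
print(hits)  # ['cat']
```

Here "cats" and "catalog" are skipped because the character after "cat" is still alphabetic.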
- class WordStart(word_chars: str = printables, **kwargs)¶
Bases:
PositionToken

Matches if the current position is at the beginning of a Word, and is not preceded by any character in a given set of word_chars (default=printables). To emulate the \b behavior of regular expressions, use WordStart(alphanums). WordStart will also match at the beginning of the string being parsed, or at the beginning of a line.
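A minimal sketch (the sample text is illustrative):

```python
import pyparsing as pp

# match integers only where a word begins; the "5" in "x5" is
# preceded by a word character, so it is skipped
leading_int = pp.WordStart(pp.alphanums) + pp.Word(pp.nums)

hits = [t[0] for t in leading_int.search_string("12 abc x5 678")]
print(hits)  # ['12', '678']
```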
- class ZeroOrMore(
- expr: str | ParserElement,
- stop_on: ParserElement | str | None = None,
- **kwargs,
Bases:
_MultipleMatch

Optional repetition of zero or more of the given expression.
Parameters:
- expr - expression that must match zero or more times
- stop_on - expression for a terminating sentinel (only required if the sentinel would ordinarily match the repetition expression) - (default=None)
Example: similar to
OneOrMore
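A minimal sketch of the zero-vs-one distinction (the grammar and names are illustrative):

```python
import pyparsing as pp

# a label optionally followed by any number of integer values;
# ZeroOrMore succeeds even when no values are present
record = pp.Word(pp.alphas) + pp.ZeroOrMore(pp.Word(pp.nums))

print(record.parse_string("totals 10 20 30").as_list())  # ['totals', '10', '20', '30']
print(record.parse_string("totals").as_list())           # ['totals']
```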
- autoname_elements() None¶
Utility to simplify mass-naming of parser elements, for generating railroad diagrams with named subdiagrams.
- col(loc: int, strg: str) int¶
Returns current column within a string, counting newlines as line separators. The first column is number 1.
Note: the default parsing behavior is to expand tabs in the input string before starting the parsing process. See
ParserElement.parse_string() for more information on parsing strings containing <TAB>s, and suggested methods to maintain a consistent view of the parsed string, the parse location, and line and column positions within the parsed string.
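For example (the sample string is illustrative):

```python
from pyparsing import col

s = "abc\ndef"
print(col(1, s))  # 2 - loc 1 is the 'b' in "abc"
print(col(4, s))  # 1 - loc 4 is the 'd' beginning the second line
```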
- common¶
alias of
pyparsing_common
- conditionAsParseAction(
- fn: Callable[[], bool] | Callable[[ParseResults], bool] | Callable[[int, ParseResults], bool] | Callable[[str, int, ParseResults], bool],
- message: str | None = None,
- fatal: bool = False,
Deprecated since version 3.0.0: Use
condition_as_parse_action instead
- condition_as_parse_action(
- fn: Callable[[], bool] | Callable[[ParseResults], bool] | Callable[[int, ParseResults], bool] | Callable[[str, int, ParseResults], bool],
- message: str | None = None,
- fatal: bool = False,
Function to convert a simple predicate function that returns
True or False into a parse action. Can be used in places where a parse action is required and ParserElement.add_condition() cannot be used (such as when adding a condition to an operator level in infix_notation).

Optional keyword arguments:
- Parameters:
message – define a custom message to be used in the raised exception
fatal – if
True, will raise ParseFatalException to stop parsing immediately; otherwise will raise ParseException
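A minimal sketch (the predicate and message are illustrative):

```python
import pyparsing as pp

# only accept integers below 100; the predicate is converted into a
# parse action that raises ParseException when it returns False
small_int = pp.Word(pp.nums).add_parse_action(
    pp.condition_as_parse_action(lambda t: int(t[0]) < 100,
                                 message="expected value < 100")
)

print(small_int.parse_string("42"))  # ['42']
try:
    small_int.parse_string("420")
except pp.ParseException as pe:
    print(pe.msg)  # expected value < 100
```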
- countedArray(
- expr: ParserElement,
- int_expr: ParserElement | None = None,
- **kwargs,
Deprecated since version 3.0.0: Use
counted_array instead
- counted_array(
- expr: ParserElement,
- int_expr: ParserElement | None = None,
- **kwargs,
Helper to define a counted list of expressions.
This helper defines a pattern of the form:
integer expr expr expr...
where the leading integer tells how many expr expressions follow. The matched tokens are returned as the array of expr tokens as a list - the leading count token is suppressed.
If
int_expr is specified, it should be a pyparsing expression that produces an integer value.

Examples:
>>> counted_array(Word(alphas)).parse_string('2 ab cd ef')
ParseResults(['ab', 'cd'], {})
In this parser, the leading integer value is given in binary, ‘10’ indicating that 2 values are in the array:
>>> binary_constant = Word('01').set_parse_action(lambda t: int(t[0], 2))
>>> counted_array(Word(alphas), int_expr=binary_constant
...     ).parse_string('10 ab cd ef')
ParseResults(['ab', 'cd'], {})
If other fields must be parsed after the count but before the list items, give the fields results names and they will be preserved in the returned ParseResults:
>>> ppc = pyparsing.common
>>> count_with_metadata = ppc.integer + Word(alphas)("type")
>>> typed_array = counted_array(Word(alphanums),
...     int_expr=count_with_metadata)("items")
>>> result = typed_array.parse_string("3 bool True True False")
>>> print(result.dump())
['True', 'True', 'False']
- items: ['True', 'True', 'False']
- type: 'bool'
- delimitedList(
- expr: str | ParserElement,
- delim: str | ParserElement = ',',
- combine: bool = False,
- min: Optional[int] = None,
- max: Optional[int] = None,
- *,
- allow_trailing_delim: bool = False,
Deprecated since version 3.0.0: Use
DelimitedList instead
- delimited_list(
- expr: str | ParserElement,
- delim: str | ParserElement = ',',
- combine: bool = False,
- min: Optional[int] = None,
- max: Optional[int] = None,
- *,
- allow_trailing_delim: bool = False,
Deprecated since version 3.0.0: Use
DelimitedList instead
- dictOf(
- key: ParserElement,
- value: ParserElement,
Deprecated since version 3.0.0: Use
dict_of instead
- dict_of(
- key: ParserElement,
- value: ParserElement,
Helper to easily and clearly define a dictionary by specifying the respective patterns for the key and value. Takes care of defining the
Dict, ZeroOrMore, and Group tokens in the proper order. The key pattern can include delimiting markers or punctuation, as long as they are suppressed, thereby leaving the significant key text. The value pattern can include named results, so that the Dict results can include named token fields.

Example:
>>> text = "shape: SQUARE posn: upper left color: light blue texture: burlap"
>>> data_word = Word(alphas)
>>> label = data_word + FollowedBy(':')
>>> attr_expr = (
...     label
...     + Suppress(':')
...     + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
>>> print(attr_expr[1, ...].parse_string(text).dump())
['shape', 'SQUARE', 'posn', 'upper left', 'color', 'light blue', 'texture', 'burlap']
>>> attr_label = label
>>> attr_value = Suppress(':') + OneOrMore(data_word, stop_on=label
...     ).set_parse_action(' '.join)
>>> # similar to Dict, but simpler call format
>>> result = dict_of(attr_label, attr_value).parse_string(text)
>>> print(result.dump())
[['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']]
- color: 'light blue'
- posn: 'upper left'
- shape: 'SQUARE'
- texture: 'burlap'
[0]:
  ['shape', 'SQUARE']
[1]:
  ['posn', 'upper left']
[2]:
  ['color', 'light blue']
[3]:
  ['texture', 'burlap']
>>> print(result['shape'])
SQUARE
>>> print(result.shape)  # object attribute access works too
SQUARE
>>> print(result.as_dict())
{'shape': 'SQUARE', 'posn': 'upper left', 'color': 'light blue', 'texture': 'burlap'}
- indentedBlock(blockStatementExpr, indentStack, indent=True, backup_stacks=[])¶
Deprecated since version 3.0.0: Use the
IndentedBlock class instead. Note that IndentedBlock has a different method signature.

Helper method for defining space-delimited indentation blocks, such as those used to define block statements in Python source code.
- Parameters:
blockStatementExpr – expression defining syntax of statement that is repeated within the indented block
indentStack – list created by caller to manage indentation stack (multiple
statementWithIndentedBlock expressions within a single grammar should share a common indentStack)
indent – boolean indicating whether block must be indented beyond the current level; set to False for block of left-most statements
A valid block must contain at least one
blockStatement.

(Note that indentedBlock uses internal parse actions which make it incompatible with packrat parsing.)
Example:
data = '''
def A(z):
  A1
  B = 100
  G = A2
  A2
  A3
B
def BB(a,b,c):
  BB1
  def BBA():
    bba1
    bba2
    bba3
C
D
def spam(x,y):
    def eggs(z):
        pass
'''

indentStack = [1]
stmt = Forward()

identifier = Word(alphas, alphanums)
funcDecl = ("def" + identifier + Group("(" + Opt(delimitedList(identifier)) + ")") + ":")
func_body = indentedBlock(stmt, indentStack)
funcDef = Group(funcDecl + func_body)

rvalue = Forward()
funcCall = Group(identifier + "(" + Opt(delimitedList(rvalue)) + ")")
rvalue << (funcCall | identifier | Word(nums))
assignment = Group(identifier + "=" + rvalue)
stmt << (funcDef | assignment | identifier)

module_body = stmt[1, ...]

parseTree = module_body.parseString(data)
parseTree.pprint()
prints:
[['def', 'A', ['(', 'z', ')'], ':', [['A1'], [['B', '=', '100']], [['G', '=', 'A2']], ['A2'], ['A3']]], 'B', ['def', 'BB', ['(', 'a', 'b', 'c', ')'], ':', [['BB1'], [['def', 'BBA', ['(', ')'], ':', [['bba1'], ['bba2'], ['bba3']]]]]], 'C', 'D', ['def', 'spam', ['(', 'x', 'y', ')'], ':', [[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]]
- infixNotation(
- base_expr: ParserElement,
- op_list: list[tuple[ParserElement | str | tuple[ParserElement | str, ParserElement | str], int, OpAssoc, Callable[[], Any] | Callable[[ParseResults], Any] | Callable[[int, ParseResults], Any] | Callable[[str, int, ParseResults], Any] | None] | tuple[ParserElement | str | tuple[ParserElement | str, ParserElement | str], int, OpAssoc]],
- lpar: str | ParserElement = Suppress('('),
- rpar: str | ParserElement = Suppress(')'),
Deprecated since version 3.0.0: Use
infix_notation instead
- infix_notation(
- base_expr: ParserElement,
- op_list: list[tuple[ParserElement | str | tuple[ParserElement | str, ParserElement | str], int, OpAssoc, Callable[[], Any] | Callable[[ParseResults], Any] | Callable[[int, ParseResults], Any] | Callable[[str, int, ParseResults], Any] | None] | tuple[ParserElement | str | tuple[ParserElement | str, ParserElement | str], int, OpAssoc]],
- lpar: str | ParserElement = Suppress('('),
- rpar: str | ParserElement = Suppress(')'),
Helper method for constructing grammars of expressions made up of operators working in a precedence hierarchy. Operators may be unary or binary, left- or right-associative. Parse actions can also be attached to operator expressions. The generated parser will also recognize the use of parentheses to override operator precedences (see example below).
Note: if you define a deep operator list, you may see performance issues when using infix_notation. See
ParserElement.enable_packrat for a mechanism to potentially improve your parser performance.
- Parameters:
base_expr – expression representing the most basic operand to be used in the expression
op_list –
list of tuples, one for each operator precedence level in the expression grammar; each tuple is of the form
(op_expr, num_operands, right_left_assoc, (optional) parse_action), where:
- op_expr is the pyparsing expression for the operator; may also be a string, which will be converted to a Literal; if num_operands is 3, op_expr is a tuple of two expressions, for the two operators separating the 3 terms
- num_operands is the number of terms for this operator (must be 1, 2, or 3)
- right_left_assoc is the indicator whether the operator is right or left associative, using the pyparsing-defined constants OpAssoc.RIGHT and OpAssoc.LEFT
- parse_action is the parse action to be associated with expressions matching this operator expression (the parse action tuple member may be omitted); if the parse action is passed a tuple or list of functions, this is equivalent to calling set_parse_action(*fn) (ParserElement.set_parse_action)
lpar – expression for matching left-parentheses; if passed as a str, then will be parsed as
Suppress(lpar). If lpar is passed as an expression (such as Literal('(')), then it will be kept in the parsed results, and grouped with them. (default=Suppress('('))
rpar – expression for matching right-parentheses; if passed as a str, then will be parsed as
Suppress(rpar). If rpar is passed as an expression (such as Literal(')')), then it will be kept in the parsed results, and grouped with them. (default=Suppress(')'))
Example:
# simple example of four-function arithmetic with ints and
# variable names
integer = pyparsing_common.signed_integer
varname = pyparsing_common.identifier

arith_expr = infix_notation(integer | varname,
    [
        ('-', 1, OpAssoc.RIGHT),
        (one_of('* /'), 2, OpAssoc.LEFT),
        (one_of('+ -'), 2, OpAssoc.LEFT),
    ])

arith_expr.run_tests('''
    5+3*6
    (5+3)*6
    (5+x)*y
    -2--11
    ''', full_dump=False)
prints:
5+3*6
[[5, '+', [3, '*', 6]]]

(5+3)*6
[[[5, '+', 3], '*', 6]]

(5+x)*y
[[[5, '+', 'x'], '*', 'y']]

-2--11
[[['-', 2], '-', ['-', 11]]]
- line(loc: int, strg: str) str¶
Returns the line of text containing loc within a string, counting newlines as line separators.
- lineno(loc: int, strg: str) int¶
Returns current line number within a string, counting newlines as line separators. The first line is number 1.
Note - the default parsing behavior is to expand tabs in the input string before starting the parsing process. See
ParserElement.parse_string() for more information on parsing strings containing <TAB>s, and suggested methods to maintain a consistent view of the parsed string, the parse location, and line and column positions within the parsed string.
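For example, line and lineno can be used together to report positions (the sample string is illustrative):

```python
from pyparsing import line, lineno

s = "abc\ndef\nghi"
print(lineno(5, s))  # 2 - loc 5 falls inside "def"
print(line(5, s))    # def
```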
- locatedExpr(expr: ParserElement) ParserElement¶
Deprecated since version 3.0.0: Use the
Located class instead. Note that Located returns results with one less grouping level.

Helper to decorate a returned token with its starting and ending locations in the input string.
This helper adds the following results names:
- locn_start - location where matched expression begins
- locn_end - location where matched expression ends
- value - the actual parsed results
Be careful if the input text contains
<TAB> characters; you may want to call ParserElement.parse_with_tabs().
- makeHTMLTags(
- tag_str: str | ParserElement,
Deprecated since version 3.0.0: Use
make_html_tags instead
- makeXMLTags(
- tag_str: str | ParserElement,
Deprecated since version 3.0.0: Use
make_xml_tags instead
- make_html_tags(
- tag_str: str | ParserElement,
Helper to construct opening and closing tag expressions for HTML, given a tag name. Matches tags in either upper or lower case, attributes with namespaces and with quoted or unquoted values.
Example:
text = '<td>More info at the <a href="https://github.com/pyparsing/pyparsing/wiki">pyparsing</a> wiki page</td>'

# make_html_tags returns pyparsing expressions for the opening and
# closing tags as a 2-tuple
a, a_end = make_html_tags("A")
link_expr = a + SkipTo(a_end)("link_text") + a_end

for link in link_expr.search_string(text):
    # attributes in the <A> tag (like "href" shown here) are
    # also accessible as named results
    print(link.link_text, '->', link.href)
prints:
pyparsing -> https://github.com/pyparsing/pyparsing/wiki
- make_xml_tags(
- tag_str: str | ParserElement,
Helper to construct opening and closing tag expressions for XML, given a tag name. Matches tags only in the given upper/lower case.
Example: similar to
make_html_tags
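A minimal sketch using an XML fragment (the tag and content are illustrative):

```python
import pyparsing as pp

# unlike make_html_tags, matching is case-sensitive for the given tag
xml = '<author>John Sheridan</author>'
author, author_end = pp.make_xml_tags("author")
expr = author + pp.SkipTo(author_end)("name") + author_end

result = expr.parse_string(xml)
print(result.name)  # John Sheridan
```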
- matchOnlyAtCol(
- n: int,
Deprecated since version 3.0.0: Use
match_only_at_col instead
- matchPreviousExpr(expr: ParserElement) ParserElement¶
Deprecated since version 3.0.0: Use
match_previous_expr instead
- matchPreviousLiteral(expr: ParserElement) ParserElement¶
Deprecated since version 3.0.0: Use
match_previous_literal instead
- match_only_at_col(
- n: int,
Helper method for defining parse actions that require matching at a specific column in the input text.
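A minimal sketch (the data layout is illustrative; record IDs are assumed to always start in column 1):

```python
import pyparsing as pp

text = "ID1 value1\nID2 value2"

# only accept words that begin in column 1 of their line
ident = pp.Word(pp.alphanums).add_parse_action(pp.match_only_at_col(1))
ids = [t[0] for t in ident.search_string(text)]
print(ids)  # ['ID1', 'ID2']
```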
- match_previous_expr(expr: ParserElement) ParserElement¶
Helper to define an expression that is indirectly defined from the tokens matched in a previous expression, that is, it looks for a ‘repeat’ of a previous expression. For example:
first = Word(nums)
second = match_previous_expr(first)
match_expr = first + ":" + second
will match
"1:1", but not"1:2". Because this matches by expressions, will not match the leading"1:1"in"1:10"; the expressions are evaluated first, and then compared, so"1"is compared with"10". Do not use with packrat parsing enabled.
- match_previous_literal(expr: ParserElement) ParserElement¶
Helper to define an expression that is indirectly defined from the tokens matched in a previous expression, that is, it looks for a ‘repeat’ of a previous expression. For example:
first = Word(nums)
second = match_previous_literal(first)
match_expr = first + ":" + second
will match
"1:1", but not"1:2". Because this matches a previous literal, will also match the leading"1:1"in"1:10". If this is not desired, usematch_previous_expr. Do not use with packrat parsing enabled.
- nestedExpr(
- opener: str | ParserElement = '(',
- closer: str | ParserElement = ')',
- content: ParserElement | None = None,
- ignore_expr: ParserElement | None = _NO_IGNORE_EXPR_GIVEN,
- **kwargs,
Deprecated since version 3.0.0: Use
nested_expr instead
- nested_expr(
- opener: str | ParserElement = '(',
- closer: str | ParserElement = ')',
- content: ParserElement | None = None,
- ignore_expr: ParserElement | None = _NO_IGNORE_EXPR_GIVEN,
- **kwargs,
Helper method for defining nested lists enclosed in opening and closing delimiters (
"("and")"are the default).- Parameters:
opener – str - opening character for a nested list (default=
"("); can also be a pyparsing expressioncloser – str - closing character for a nested list (default=
")"); can also be a pyparsing expressioncontent – expression for items within the nested lists
ignore_expr – expression for ignoring opening and closing delimiters (default =
quoted_string)
Parameter
ignoreExpr is retained for compatibility but will be removed in a future release.

If an expression is not provided for the content argument, the nested expression will capture all whitespace-delimited content between delimiters as a list of separate values.
Use the
ignore_expr argument to define expressions that may contain opening or closing characters that should not be treated as opening or closing characters for nesting, such as quoted_string or a comment expression. Specify multiple expressions using an Or or MatchFirst. The default is quoted_string, but if no expressions are to be ignored, then pass None for this argument.

Example:
data_type = one_of("void int short long char float double")
decl_data_type = Combine(data_type + Opt(Word('*')))
ident = Word(alphas+'_', alphanums+'_')
number = pyparsing_common.number
arg = Group(decl_data_type + ident)
LPAR, RPAR = map(Suppress, "()")

code_body = nested_expr('{', '}', ignore_expr=(quoted_string | c_style_comment))

c_function = (decl_data_type("type")
              + ident("name")
              + LPAR + Opt(DelimitedList(arg), [])("args") + RPAR
              + code_body("body"))
c_function.ignore(c_style_comment)

source_code = '''
    int is_odd(int x) {
        return (x%2);
    }

    int dec_to_hex(char hchar) {
        if (hchar >= '0' && hchar <= '9') {
            return (ord(hchar)-ord('0'));
        } else {
            return (10+ord(hchar)-ord('A'));
        }
    }
'''
for func in c_function.search_string(source_code):
    print(f"{func.name} ({func.type}) args: {func.args}")
prints:
is_odd (int) args: [['int', 'x']] dec_to_hex (int) args: [['char', 'hchar']]
- nullDebugAction(*args)¶
Deprecated since version 3.0.0: Use
null_debug_action instead
- null_debug_action(*args)¶
‘Do-nothing’ debug action, to suppress debugging output during parsing.
- oneOf(
- strs: Iterable[str] | str,
- caseless: bool = False,
- use_regex: bool = True,
- as_keyword: bool = False,
- **kwargs,
Deprecated since version 3.0.0: Use
one_of instead
- one_of(
- strs: Iterable[str] | str,
- caseless: bool = False,
- use_regex: bool = True,
- as_keyword: bool = False,
- **kwargs,
Helper to quickly define a set of alternative
Literals, and makes sure to do longest-first testing when there is a conflict, regardless of the input order, but returns a MatchFirst for best performance.
- Parameters:
strs – a string of space-delimited literals, or a collection of string literals
caseless – treat all literals as caseless
use_regex – bool - as an optimization, will generate a
Regex object; otherwise, will generate a MatchFirst object (if caseless=True or as_keyword=True, or if creating a Regex raises an exception)
as_keyword – bool - enforce
Keyword-style matching on the generated expressions
Parameters
asKeyword and useRegex are retained for pre-PEP8 compatibility, but will be removed in a future release.

Example:
comp_oper = one_of("< = > <= >= !=")
var = Word(alphas)
number = Word(nums)
term = var | number
comparison_expr = term + comp_oper + term
print(comparison_expr.search_string("B = 12 AA=23 B<=AA AA>12"))
prints:
[['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']]
- originalTextFor(
- expr: ParserElement,
- as_string: bool = True,
- **kwargs,
Deprecated since version 3.0.0: Use
original_text_for instead
- original_text_for(
- expr: ParserElement,
- as_string: bool = True,
- **kwargs,
Helper to return the original, untokenized text for a given expression. Useful to restore the parsed fields of an HTML start tag into the raw tag text itself, or to revert separate tokens with intervening whitespace back to the original matching input text. By default, returns a string containing the original parsed text.
If the optional
as_string argument is passed as False, then the return value is a ParseResults containing any results names that were originally matched, and a single token containing the original matched text from the input string. So if the expression passed to original_text_for contains expressions with defined results names, you must set as_string to False if you want to preserve those results name values.

The asString pre-PEP8 argument is retained for compatibility, but will be removed in a future release.

Example:
src = "this is test <b> bold <i>text</i> </b> normal text " for tag in ("b", "i"): opener, closer = make_html_tags(tag) patt = original_text_for(opener + ... + closer) print(patt.search_string(src)[0])
prints:
['<b> bold <i>text</i> </b>']
['<i>text</i>']
- class pyparsing_common¶
Bases:
object

Here are some common low-level expressions that may be useful in jump-starting parser development:
numeric forms (
integers, reals, scientific notation)
common programming identifiers
Parse actions:
Examples:
pyparsing_common.number.run_tests('''
    # any int or real number, returned as the appropriate type
    100
    -100
    +100
    3.14159
    6.02e23
    1e-12
    ''')
# any int or real number, returned as the appropriate type
100
[100]

-100
[-100]

+100
[100]

3.14159
[3.14159]

6.02e23
[6.02e+23]

1e-12
[1e-12]
pyparsing_common.fnumber.run_tests('''
    # any int or real number, returned as float
    100
    -100
    +100
    3.14159
    6.02e23
    1e-12
    ''')
# any int or real number, returned as float
100
[100.0]

-100
[-100.0]

+100
[100.0]

3.14159
[3.14159]

6.02e23
[6.02e+23]

1e-12
[1e-12]
pyparsing_common.hex_integer.run_tests('''
    # hex numbers
    100
    FF
    ''')
# hex numbers
100
[256]

FF
[255]
pyparsing_common.fraction.run_tests('''
    # fractions
    1/2
    -3/4
    ''')
# fractions
1/2
[0.5]

-3/4
[-0.75]
pyparsing_common.mixed_integer.run_tests('''
    # mixed fractions
    1
    1/2
    -3/4
    1-3/4
    ''')
# mixed fractions
1
[1]

1/2
[0.5]

-3/4
[-0.75]

1-3/4
[1.75]
import uuid
pyparsing_common.uuid.set_parse_action(token_map(uuid.UUID))
pyparsing_common.uuid.run_tests('''
    # uuid
    12345678-1234-5678-1234-567812345678
    ''')
# uuid
12345678-1234-5678-1234-567812345678
[UUID('12345678-1234-5678-1234-567812345678')]
- static as_datetime(s, l, t)¶
Parse action to convert parsed dates or datetimes to a Python
datetime.datetime.

This parse action will use the year, month, day, etc. results names defined in the ISO8601 date expressions, but it can be used with any expression that provides one or more of these fields.
Omitted fields will default to the corresponding values of Jan 1, 00:00:00.
Invalid dates will raise a
ParseException with the error message indicating the invalid date fields.
- comma_separated_list = comma separated list¶
Predefined expression of 1 or more printable words or quoted strings, separated by commas.
- static convertToDate(fmt: str = '%Y-%m-%d')¶
Deprecated since version 3.0.0: Use
convert_to_date instead
- static convertToDatetime(fmt: str = '%Y-%m-%dT%H:%M:%S.%f')¶
Deprecated since version 3.0.0: Use
convert_to_datetime instead
- static convertToFloat(_, __, t)¶
Deprecated since version 3.0.0: Use
convert_to_float instead
- static convertToInteger(_, __, t)¶
Deprecated since version 3.0.0: Use
convert_to_integer instead
- static convert_to_date(fmt: str = '%Y-%m-%d')¶
Helper to create a parse action for converting parsed date string to Python datetime.date
Parameters:
- fmt - format to be passed to datetime.strptime (default="%Y-%m-%d")

Example:
date_expr = pyparsing_common.iso8601_date.copy()
date_expr.set_parse_action(pyparsing_common.convert_to_date())
print(date_expr.parse_string("1999-12-31"))
prints:
[datetime.date(1999, 12, 31)]
- static convert_to_datetime(fmt: str = '%Y-%m-%dT%H:%M:%S.%f')¶
Helper to create a parse action for converting parsed datetime string to Python
datetime.datetime

Parameters:
- fmt - format to be passed to datetime.strptime (default="%Y-%m-%dT%H:%M:%S.%f")

Example:
dt_expr = pyparsing_common.iso8601_datetime.copy()
dt_expr.set_parse_action(pyparsing_common.convert_to_datetime())
print(dt_expr.parse_string("1999-12-31T23:59:59.999"))
prints:
[datetime.datetime(1999, 12, 31, 23, 59, 59, 999000)]
- static convert_to_float(_, __, t)¶
Parse action for converting parsed numbers to Python float
- static convert_to_integer(_, __, t)¶
Parse action for converting parsed integers to Python int
- static downcaseTokens(s, l, t)¶
Deprecated since version 3.0.0: Use
downcase_tokens instead
- static downcase_tokens(s, l, t)¶
Parse action to convert tokens to lower case.
- fnumber = fnumber¶
any int or real number, always converts to a float
- fraction = fraction¶
fractional expression of an integer divided by an integer, converts to a float
- hex_integer = hex integer¶
expression that parses a hexadecimal integer, converts to an int
- identifier = identifier¶
typical code identifier (leading alpha or ‘_’, followed by 0 or more alphas, nums, or ‘_’)
- ieee_float = ieee_float¶
any floating-point literal (int, real number, infinity, or NaN), converts to a float
- integer = integer¶
expression that parses an unsigned integer, converts to an int
- ipv4_address = IPv4 address¶
IPv4 address (
0.0.0.0 - 255.255.255.255)
- ipv6_address = IPv6 address¶
IPv6 address (long, short, or mixed form)
- iso8601_date = ISO8601 date¶
ISO8601 date (
yyyy-mm-dd)
- iso8601_date_validated = ISO8601 date¶
Validated ISO8601 date strings, raising
ParseException for invalid date values.
- iso8601_datetime = ISO8601 datetime¶
ISO8601 datetime (
yyyy-mm-ddThh:mm:ss.s(Z|+-00:00)) - trailing seconds, milliseconds, and timezone optional; accepts separating 'T' or ' '
- iso8601_datetime_validated = ISO8601 datetime¶
Validated ISO8601 date and time strings, raising
ParseException for invalid date/time values.
- mac_address = MAC address¶
MAC address xx:xx:xx:xx:xx:xx (may also have '-' or '.' delimiters)
- mixed_integer = fraction or mixed integer-fraction¶
mixed integer of the form ‘integer - fraction’, with optional leading integer, converts to a float
- number = number¶
any numeric expression, converts to the corresponding Python type
- real = real number¶
expression that parses a floating point number, converts to a float
- sci_real = real number with scientific notation¶
expression that parses a floating point number with optional scientific notation, converts to a float
- signed_integer = signed integer¶
expression that parses an integer with optional leading sign, converts to an int
- static stripHTMLTags(s: str, l: int, tokens: ParseResults)¶
Deprecated since version 3.0.0: Use
strip_html_tags instead
- static strip_html_tags(s: str, l: int, tokens: ParseResults)¶
Parse action to remove HTML tags from web page HTML source
Example:
# strip HTML links from normal text
text = '<td>More info at the <a href="https://github.com/pyparsing/pyparsing/wiki">pyparsing</a> wiki page</td>'
td, td_end = make_html_tags("TD")
table_text = td + SkipTo(td_end).set_parse_action(
    pyparsing_common.strip_html_tags)("body") + td_end

print(table_text.parse_string(text).body)
Prints:
More info at the pyparsing wiki page
- static upcaseTokens(s, l, t)¶
Deprecated since version 3.0.0: Use
upcase_tokens instead
- static upcase_tokens(s, l, t)¶
Parse action to convert tokens to upper case.
- url = url¶
URL (http/https/ftp scheme)
Changed in version 3.1.0:
url named group added
- uuid = UUID¶
UUID (
xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)
- class pyparsing_test¶
Bases:
object

namespace class for classes useful in writing unit tests
- class TestParseResultsAsserts(methodName='runTest')¶
Bases:
TestCase

A mixin class to add parse results assertion methods to normal unittest.TestCase classes.
- assertParseAndCheckDict(
- expr,
- test_string,
- expected_dict,
- msg=None,
- verbose=True,
Convenience wrapper assert to test a parser element and input string, and assert that the resulting
ParseResults.as_dict() is equal to the expected_dict.
- assertParseAndCheckList(
- expr,
- test_string,
- expected_list,
- msg=None,
- verbose=True,
Convenience wrapper assert to test a parser element and input string, and assert that the resulting
ParseResults.as_list() is equal to the expected_list.
- assertParseResultsEquals(
- result,
- expected_list=None,
- expected_dict=None,
- msg=None,
Unit test assertion to compare a
ParseResults object with an optional expected_list, and compare any defined results names with an optional expected_dict.
- assertRaisesParseException(
- exc_type=ParseException,
- expected_msg=None,
- msg=None,
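A minimal sketch of using this mixin in a test case (the grammar and class names are illustrative):

```python
import unittest
import pyparsing as pp

ppt = pp.pyparsing_test

class NumberParserTests(ppt.TestParseResultsAsserts, unittest.TestCase):
    def test_parses_digits(self):
        # assert both a successful parse and the resulting token list
        self.assertParseAndCheckList(pp.Word(pp.nums), "123", ["123"])

    def test_rejects_letters(self):
        # assertRaisesParseException is used as a context manager
        with self.assertRaisesParseException():
            pp.Word(pp.nums).parse_string("abc")

suite = unittest.TestLoader().loadTestsFromTestCase(NumberParserTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```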
- assertRunTestResults(
- run_tests_report,
- expected_parse_results=None,
- msg=None,
Unit test assertion to evaluate output of
run_tests().If a list of list-dict tuples is given as the
expected_parse_results argument, then these are zipped with the report tuples returned by run_tests() and evaluated using assertParseResultsEquals(). Finally, asserts that the overall ParserElement.run_tests success value is True.
- Parameters:
run_tests_report (tuple[bool, list[tuple[str, ParseResults | Exception]]]) – the return value from
ParserElement.run_tests()
expected_parse_results (list[tuple[str | list | dict | Exception, ...]]) – (optional)
- class reset_pyparsing_context¶
Bases:
object

Context manager to be used when writing unit tests that modify pyparsing config values:
- packrat parsing
- bounded recursion parsing
- default whitespace characters
- default keyword characters
- literal string auto-conversion class
- __diag__ settings

Example:
ppt = pyparsing.pyparsing_test

class MyTestClass(ppt.TestParseResultsAsserts):
    def test_literal(self):
        with ppt.reset_pyparsing_context():
            # test that literals used to construct
            # a grammar are automatically suppressed
            ParserElement.inline_literals_using(Suppress)

            term = Word(alphas) | Word(nums)
            group = Group('(' + term[...] + ')')

            # assert that the '()' characters
            # are not included in the parsed tokens
            self.assertParseAndCheckList(
                group,
                "(abc 123 def)",
                ['abc', '123', 'def']
            )

        # after exiting context manager, literals
        # are converted to Literal expressions again
- copy()¶
- restore()¶
- save()¶
- static with_line_numbers(
- s: str,
- start_line: int | None = None,
- end_line: int | None = None,
- expand_tabs: bool = True,
- eol_mark: str = '|',
- mark_spaces: str | None = None,
- mark_control: str | None = None,
- *,
- indent: str | int = '',
- base_1: bool = True,
Helpful method for debugging a parser - prints a string with line and column numbers. (Line and column numbers are 1-based by default; if debugging a parse action, pass base_1=False to correspond to the loc value passed to the parse action.)
- Parameters:
s – string to be printed with line and column numbers
start_line – starting line number in s to print (default=1)
end_line – ending line number in s to print (default=len(s))
expand_tabs – expand tabs to spaces, to match the pyparsing default
eol_mark – string to mark the end of lines, helps visualize trailing spaces
mark_spaces – special character to display in place of spaces
mark_control –
convert non-printing control characters to a placeholding character; valid values:
"unicode" - replaces control chars with Unicode symbols, such as “␍” and “␊”
any single character string - replace control characters with given string
None (default) - string is displayed as-is
indent – string to indent with line and column numbers; if an int is passed, converted to " " * indent
base_1 – whether to label string using base 1; if False, string will be labeled based at 0
- Returns:
input string with leading line numbers and column number headers
Changed in version 3.2.0: New indent and base_1 arguments.
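A short illustrative sketch of with_line_numbers (the sample string here is an assumption, not from the original docs):

```python
import pyparsing as pp

# a small multi-line sample with a tab and trailing spaces
sample = "abc def\n\tghi  \njkl"

# print the string with line/column number headers;
# eol_mark="|" makes trailing whitespace visible
print(pp.pyparsing_test.with_line_numbers(sample, eol_mark="|"))
```

The output is most useful when pasted next to a failing parse action's `loc` value; pass `base_1=False` to match the 0-based locations reported there.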
- class pyparsing_unicode¶
Bases: unicode_set
A namespace class for defining common language unicode_sets.
- class Arabic¶
Bases: unicode_set
Unicode set for Arabic Unicode Character Range
- BMP¶
alias of BasicMultilingualPlane
- class BasicMultilingualPlane¶
Bases: unicode_set
Unicode set for the Basic Multilingual Plane
- class CJK¶
Bases: Chinese, Japanese, Hangul
Unicode set for combined Chinese, Japanese, and Korean (CJK) Unicode Character Range
- class Chinese¶
Bases: unicode_set
Unicode set for Chinese Unicode Character Range
- class Cyrillic¶
Bases: unicode_set
Unicode set for Cyrillic Unicode Character Range
- class Devanagari¶
Bases: unicode_set
Unicode set for Devanagari Unicode Character Range
- class Greek¶
Bases: unicode_set
Unicode set for Greek Unicode Character Ranges
- class Hangul¶
Bases: unicode_set
Unicode set for Hangul (Korean) Unicode Character Range
- class Hebrew¶
Bases: unicode_set
Unicode set for Hebrew Unicode Character Range
- class Japanese¶
Bases: unicode_set
Unicode set for Japanese Unicode Character Range, combining Kanji, Hiragana, and Katakana ranges
- class Hiragana¶
Bases: unicode_set
Unicode set for Hiragana Unicode Character Range
- class Kanji¶
Bases: unicode_set
Unicode set for Kanji Unicode Character Range
- class Katakana¶
Bases: unicode_set
Unicode set for Katakana Unicode Character Range
- class Latin1¶
Bases: unicode_set
Unicode set for Latin-1 Unicode Character Range
- class LatinA¶
Bases: unicode_set
Unicode set for Latin-A Unicode Character Range
- class LatinB¶
Bases: unicode_set
Unicode set for Latin-B Unicode Character Range
- class Thai¶
Bases: unicode_set
Unicode set for Thai Unicode Character Range
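As a quick sketch of how these language sets are used (the sample input is an assumption), each set's alphas string can be passed directly to Word:

```python
import pyparsing as pp

# build a word expression from the Greek language unicode_set
greek_word = pp.Word(pp.pyparsing_unicode.Greek.alphas)
print(greek_word.parse_string("αβγ"))
```

The same pattern works for any of the language classes above, or for `pyparsing_unicode.BMP` when you simply want "any printable BMP character".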
- removeQuotes(s: str, l: int, t: ParseResults) Any¶
Deprecated since version 3.0.0: Use remove_quotes instead
- remove_quotes(s: str, l: int, t: ParseResults) Any¶
Helper parse action for removing quotation marks from parsed quoted strings that use a single character for quoting. For quoted strings that may use multiple characters for quoting, use the QuotedString class.
Example:
>>> # by default, quotation marks are included in parsed results
>>> quoted_string.parse_string("'Now is the Winter of our Discontent'")
ParseResults(["'Now is the Winter of our Discontent'"], {})
>>> # use remove_quotes to strip quotation marks from parsed results
>>> dequoted = quoted_string().set_parse_action(remove_quotes)
>>> dequoted.parse_string("'Now is the Winter of our Discontent'")
ParseResults(['Now is the Winter of our Discontent'], {})
- replaceHTMLEntity(s, l, t)¶
Deprecated since version 3.0.0: Use replace_html_entity instead
- replaceWith(
- repl_str: Any,
Deprecated since version 3.0.0: Use replace_with instead
- replace_html_entity(s, l, t)¶
Helper parser action to replace common HTML entities with their special characters
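A minimal sketch of this parse action in use (the input text is an assumption), attached to pyparsing's predefined common_html_entity expression:

```python
import pyparsing as pp

# attach replace_html_entity to a copy of the predefined
# entity expression, so the shared original is left untouched
entity = pp.common_html_entity.copy().set_parse_action(pp.replace_html_entity)

# transform_string replaces each matched entity in place
print(entity.transform_string("Dot &amp; Dash &lt;Ltd&gt;"))
```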
- replace_with(
- repl_str: Any,
Helper method for common parse actions that simply return a literal value. Especially useful when used with transform_string().
Example:
>>> num = Word(nums).set_parse_action(lambda toks: int(toks[0]))
>>> na = one_of("N/A NA").set_parse_action(replace_with(math.nan))
>>> term = na | num
>>> term[1, ...].parse_string("324 234 N/A 234")
ParseResults([324, 234, nan, 234], {})
- show_best_practices(file=sys.stdout) str | None¶
Load and return the project’s best practices.
Example:
>>> import pyparsing as pp
>>> pp.show_best_practices()
<!-- This file contains instructions for best practices for developing parsers
with pyparsing, and can be used by AI agents when generating Python code using
pyparsing. -->
...
This can also be run from the command line:
python -m pyparsing.ai.show_best_practices
- srange(s: str) str¶
Helper to easily define string ranges for use in Word construction. Borrows syntax from regexp '[]' string range definitions:
srange("[0-9]")   -> "0123456789"
srange("[a-z]")   -> "abcdefghijklmnopqrstuvwxyz"
srange("[a-z$_]") -> "abcdefghijklmnopqrstuvwxyz$_"
The input string must be enclosed in []’s, and the returned string is the expanded character set joined into a single string. The values enclosed in the []’s may be:
a single character
an escaped character with a leading backslash (such as \- or \])
an escaped hex character with a leading '\x' (\x21, which is a '!' character) (\0x## is also supported for backwards compatibility)
an escaped octal character with a leading '\0' (\041, which is a '!' character)
a range of any of the above, separated by a dash ('a-z', etc.)
any combination of the above ('aeiouy', 'a-zA-Z0-9_$', etc.)
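A runnable sketch combining srange with Word (the hexadecimal example is an assumption, chosen for illustration):

```python
import pyparsing as pp

# expand a regex-style character class into an explicit character string
hex_chars = pp.srange("[0-9a-fA-F]")
print(hex_chars)  # "0123456789abcdefABCDEF"

# typical use: building a Word expression from the expanded set
hex_word = pp.Word(hex_chars)
print(hex_word.parse_string("DEADbeef01"))
```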
- testing¶
alias of pyparsing_test
- tokenMap(
- func,
- *args,
Deprecated since version 3.0.0: Use token_map instead
- token_map(
- func,
- *args,
Helper to define a parse action by mapping a function to all elements of a ParseResults list. If any additional args are passed, they are forwarded to the given function as additional arguments after the token, as in hex_integer = Word(hexnums).set_parse_action(token_map(int, 16)), which will convert the parsed data to an integer using base 16.
Example (compare the last example to the one in ParserElement.transform_string):
hex_ints = Word(hexnums)[1, ...].set_parse_action(token_map(int, 16))
hex_ints.run_tests('''
    00 11 22 aa FF 0a 0d 1a
    ''')

upperword = Word(alphas).set_parse_action(token_map(str.upper))
upperword[1, ...].run_tests('''
    my kingdom for a horse
    ''')

wd = Word(alphas).set_parse_action(token_map(str.title))
wd[1, ...].set_parse_action(' '.join).run_tests('''
    now is the winter of our discontent made glorious summer by this sun of york
    ''')
prints:
00 11 22 aa FF 0a 0d 1a
[0, 17, 34, 170, 255, 10, 13, 26]

my kingdom for a horse
['MY', 'KINGDOM', 'FOR', 'A', 'HORSE']

now is the winter of our discontent made glorious summer by this sun of york
['Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York']
- traceParseAction(
- f: Callable[[], Any] | Callable[[ParseResults], Any] | Callable[[int, ParseResults], Any] | Callable[[str, int, ParseResults], Any],
Deprecated since version 3.0.0: Use trace_parse_action instead
- trace_parse_action(
- f: Callable[[], Any] | Callable[[ParseResults], Any] | Callable[[int, ParseResults], Any] | Callable[[str, int, ParseResults], Any],
Decorator for debugging parse actions.
When the parse action is called, this decorator will print ">> entering method-name(line:<current_source_line>, <parse_location>, <matched_tokens>)". When the parse action completes, the decorator will print "<<" followed by the returned value, or any exception that the parse action raised.
Example:
wd = Word(alphas)

@trace_parse_action
def remove_duplicate_chars(tokens):
    return ''.join(sorted(set(''.join(tokens))))

wds = wd[1, ...].set_parse_action(remove_duplicate_chars)
print(wds.parse_string("slkdjs sld sldd sdlf sdljf"))
prints:
>>entering remove_duplicate_chars(line: 'slkdjs sld sldd sdlf sdljf', 0, ParseResults(['slkdjs', 'sld', 'sldd', 'sdlf', 'sdljf'], {}))
<<leaving remove_duplicate_chars (ret: 'dfjkls')
['dfjkls']
Changed in version 3.1.0: Exception type added to output
- ungroup(expr: ParserElement) ParserElement¶
Helper to undo pyparsing’s default grouping of And expressions, even if all but one are non-empty.
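A short sketch of the effect (the grammar here is an assumption, chosen for illustration):

```python
import pyparsing as pp

# a Group() wraps its tokens in a nested list...
grouped = pp.Group(pp.Word(pp.alphas) + pp.Word(pp.nums))
print(grouped.parse_string("abc 123"))   # nested: [['abc', '123']]

# ...ungroup strips that one level of nesting back out
flat = pp.ungroup(grouped)
print(flat.parse_string("abc 123"))      # flat: ['abc', '123']
```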
- unicode¶
alias of pyparsing_unicode
- class unicode_set¶
Bases: object
A set of Unicode characters, for language-specific strings for alphas, nums, alphanums, and printables. A unicode_set is defined by a list of ranges in the Unicode character set, in a class attribute _ranges. Ranges can be specified using 2-tuples or a 1-tuple, such as:
_ranges = [
    (0x0020, 0x007e),
    (0x00a0, 0x00ff),
    (0x0100,),
]
Ranges are left- and right-inclusive. A 1-tuple of (x,) is treated as (x, x).
A unicode set can also be defined using multiple inheritance of other unicode sets:
class CJK(Chinese, Japanese, Korean):
    pass
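A minimal sketch of a custom unicode_set (the choice of the Runic block U+16A0–U+16FF is an assumption for illustration):

```python
import pyparsing as pp

# define a unicode_set covering the Runic block
class Runic(pp.unicode_set):
    _ranges = [(0x16A0, 0x16FF)]

# the alphas / nums / printables attributes are
# derived automatically from _ranges
rune_word = pp.Word(Runic.alphas)
print(rune_word.parse_string("ᚠᚢᚦ"))
```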
- alphanums = ''¶
- alphas = ''¶
- identbodychars = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyzªµ·ºÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿ'¶
- identchars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyzªµºÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿ'¶
- identifier = W:(A-Z_a-zªµºÀ-Ö..., 0-9A-Z_a-zªµ·...)¶
- nums = ''¶
- printables = ''¶
- withAttribute(
- *args: tuple[str, str],
- **attr_dict,
Deprecated since version 3.0.0: Use with_attribute instead
- withClass(
- classname: str,
- namespace: str = '',
Deprecated since version 3.0.0: Use with_class instead
- with_attribute(
- *args: tuple[str, str],
- **attr_dict,
Helper to create a validating parse action to be used with start tags created with make_xml_tags or make_html_tags. Use with_attribute to qualify a starting tag with a required attribute value, to avoid false matches on common tags such as <TD> or <DIV>.
Call with_attribute with a series of attribute names and values. Specify the list of filter attribute names and values as:
keyword arguments, as in (align="right"), or
as an explicit dict with ** operator, when an attribute name is also a Python reserved word, as in **{"class": "Customer", "align": "right"}
a list of name-value tuples, as in (("ns1:class", "Customer"), ("ns2:align", "right"))
For attribute names with a namespace prefix, you must use the second form. Attribute names are matched insensitive to upper/lower case.
If just testing for class (with or without a namespace), use with_class.
To verify that the attribute exists, but without specifying a value, pass with_attribute.ANY_VALUE as the value.
html = '''
    <div>
    Some text
    <div type="grid">1 4 0 1 0</div>
    <div type="graph">1,3 2,3 1,1</div>
    <div>this has no type</div>
    </div>
'''
div, div_end = make_html_tags("div")
Only match div tag having a type attribute with value “grid”:
div_grid = div().set_parse_action(with_attribute(type="grid"))
grid_expr = div_grid + SkipTo(div | div_end)("body")
for grid_header in grid_expr.search_string(html):
    print(grid_header.body)
prints:
1 4 0 1 0
Construct a match with any div tag having a type attribute, regardless of the value:
div_any_type = div().set_parse_action(
    with_attribute(type=with_attribute.ANY_VALUE)
)
div_expr = div_any_type + SkipTo(div | div_end)("body")
for div_header in div_expr.search_string(html):
    print(div_header.body)
prints:
1 4 0 1 0
1,3 2,3 1,1
- with_class(
- classname: str,
- namespace: str = '',
Simplified version of with_attribute() when matching on a div class - made difficult because class is a reserved word in Python.
Using similar input data to the with_attribute() examples:
html = '''
    <div>
    Some text
    <div class="grid">1 4 0 1 0</div>
    <div class="graph">1,3 2,3 1,1</div>
    <div>this <div> has no class</div>
    </div>
'''
div, div_end = make_html_tags("div")
Only match div tag having the “grid” class:
div_grid = div().set_parse_action(with_class("grid"))
grid_expr = div_grid + SkipTo(div | div_end)("body")
for grid_header in grid_expr.search_string(html):
    print(grid_header.body)
prints:
1 4 0 1 0
Construct a match with any div tag having a class attribute, regardless of the value:
div_any_type = div().set_parse_action(
    with_class(with_attribute.ANY_VALUE)
)
div_expr = div_any_type + SkipTo(div | div_end)("body")
for div_header in div_expr.search_string(html):
    print(div_header.body)
prints:
1 4 0 1 0
1,3 2,3 1,1
Module pyparsing.diagram¶
- class AnnotatedItem(*args: Any, **kwargs: Any)¶
Bases: Group
Simple subclass of Group that creates an annotation label
- class ConverterState(diagram_kwargs: dict | None = None)¶
Bases: object
Stores some state that persists between recursions into the element tree
- diagram_kwargs: dict¶
Shared kwargs that are used to customize the construction of diagrams
- diagrams: dict[int, EditablePartial[NamedDiagram]]¶
A dictionary mapping ParserElement IDs to subdiagrams generated from them
- extract_into_diagram(el_id: int)¶
Used when we encounter the same token twice in the same tree. When this happens, we replace all instances of that token with a terminal, and create a new subdiagram for the token
- generate_index() int¶
Generate a number used to index a diagram
- get(key, default=None)¶
- index: int¶
The index of the next element. This is used for sorting
- index_generator = count(1)¶
- class EachItem(*args: Any, **kwargs: Any)¶
Bases: Group
Custom railroad item to compose a:
railroad.Group containing a
railroad.OneOrMore containing a
railroad.Choice of the elements in the railroad.Each
with the group label indicating that all must be matched
- all_label = '[ALL]'¶
- class EditablePartial(func: Callable[[...], T], args: list, kwargs: dict)¶
Bases: Generic[T]
Acts like a functools.partial, but can be edited. In other words, it represents a type that hasn’t yet been constructed.
- classmethod from_call(
- func: Callable[[...], T],
- *args,
- **kwargs,
If you call this function in the same way that you would call the constructor, it will store the arguments as you expect. For example
EditablePartial.from_call(Fraction, 1, 3)() == Fraction(1, 3)
- property name¶
- class ElementState(
- element: ParserElement,
- converted: EditablePartial,
- parent: EditablePartial,
- number: int,
- name: str = None,
- parent_index: int | None = None,
- extract: bool = False,
- complete: bool = False,
Bases: object
State recorded for an individual pyparsing Element
- complete: bool = False¶
If true, all of this element’s children have been filled out
- converted: EditablePartial¶
The output Railroad element in an unconverted state
- element: ParserElement¶
The pyparsing element that this represents
- extract: bool = False¶
If true, we should extract this out into a subdiagram
- mark_for_extraction(
- el_id: int,
- state: ConverterState,
- name: str = None,
- force: bool = False,
Called when this instance has been seen twice, and thus should eventually be extracted into a sub-diagram.
- Parameters:
el_id – id of the element
state – element/diagram state tracker
name – name to use for this element’s text
force – if true, force extraction now, regardless of the state of this. Only useful for extracting the root element when we know we’re finished.
- name: str = None¶
The name of the element
- number: int¶
The order in which we found this element, used for sorting diagrams if this is extracted into a diagram
- parent: EditablePartial¶
The parent Railroad element, which we store so that we can extract this if it’s duplicated
- parent_index: int | None = None¶
The index of this inside its parent
- class NamedDiagram(name: str, index: int, diagram: railroad.DiagramItem = None)¶
Bases: object
A simple structure for associating a name with a railroad diagram
- property bookmark¶
- diagram: railroad.DiagramItem = None¶
- index: int¶
- name: str¶
- railroad_to_html(diagrams: list[NamedDiagram], embed=False, **kwargs) str¶
Given a list of NamedDiagram, produce a single HTML string that visualises those diagrams.
- Parameters:
kwargs – kwargs to be passed in to the template
- resolve_partial(
- partial: EditablePartial[T],
Recursively resolves a collection of Partials into whatever type they are
- to_railroad(
- element: ParserElement,
- diagram_kwargs: dict | None = None,
- vertical: int = 3,
- show_results_names: bool = False,
- show_groups: bool = False,
- show_hidden: bool = False,
Convert a pyparsing element tree into a list of diagrams. This is the recommended entrypoint to diagram creation if you want to access the Railroad tree before it is converted to HTML
- Parameters:
element – base element of the parser being diagrammed
diagram_kwargs – kwargs to pass to the Diagram() constructor
vertical – (optional) int - limit at which number of alternatives should be shown vertically instead of horizontally
show_results_names – bool to indicate whether results name annotations should be included in the diagram
show_groups – bool to indicate whether groups should be highlighted with an unlabeled surrounding box
show_hidden – bool to indicate whether internal elements that are typically hidden should be shown