This project implements a lexical analyzer using Lex. It scans textual input written in a specific language and identifies its keywords, operators, identifiers, and other syntactic tokens.
The tokens recognized by the lexical analyzer, along with their patterns and attributes, are listed below:
Token | Pattern | Attribute |
---|---|---|
FIN | `EOF` / `\0` | - |
PV | `;` | - |
IF | `if` | - |
THEN | `then` | - |
ELSE | `else` | - |
END | `end` | - |
REPEAT | `repeat` | - |
UNTIL | `until` | - |
ID | `L(_?\|(L\|C))*` | NOM |
READ | `read` | - |
WRITE | `write` | - |
OPREL | `<` \| `=` | COPREL ∈ {INF, EGAL} |
OPADD | `+` \| `-` | COPADD ∈ {PLUS, MOINS} |
OPMUL | `*` \| `/` | COPMUL ∈ {PROD, DIV} |
PARG | `(` | - |
PARD | `)` | - |
ENTIER | `C+` | VAL |
AFFECT | `:=` | - |
- The regular expressions `L` and `C` correspond to `[A-Za-z]` and `[0-9]`, respectively.
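
To make the token table concrete, here is a minimal sketch of how some of these rules could be written in a flex specification. It is an illustration under assumptions, not the project's actual `lexer.l`: each action simply prints the recognized token (and its attribute where the table defines one) instead of returning token codes to a parser.

```lex
%{
/* Illustrative sketch only -- not the project's actual lexer.l.
   Each action prints the recognized token and, where the table defines
   one, its attribute (NOM, VAL, COPREL, COPADD, COPMUL). */
#include <stdio.h>
%}

L   [A-Za-z]
C   [0-9]

%%
"if"              { printf("IF\n"); }
"then"            { printf("THEN\n"); }
"else"            { printf("ELSE\n"); }
"end"             { printf("END\n"); }
"repeat"          { printf("REPEAT\n"); }
"until"           { printf("UNTIL\n"); }
"read"            { printf("READ\n"); }
"write"           { printf("WRITE\n"); }
":="              { printf("AFFECT\n"); }
"<"               { printf("OPREL(COPREL=INF)\n"); }
"="               { printf("OPREL(COPREL=EGAL)\n"); }
"+"               { printf("OPADD(COPADD=PLUS)\n"); }
"-"               { printf("OPADD(COPADD=MOINS)\n"); }
"*"               { printf("OPMUL(COPMUL=PROD)\n"); }
"/"               { printf("OPMUL(COPMUL=DIV)\n"); }
"("               { printf("PARG\n"); }
")"               { printf("PARD\n"); }
";"               { printf("PV\n"); }
{C}+              { printf("ENTIER(VAL=%s)\n", yytext); }
{L}(_?|{L}|{C})*  { printf("ID(NOM=%s)\n", yytext); }
[ \t\r\n]+        { /* skip whitespace */ }
.                 { printf("Unexpected character: %s\n", yytext); }
%%
```

Because the keyword rules appear before the `ID` rule, flex resolves equal-length matches in their favour, so `if` is reported as `IF` rather than as an identifier.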
- Lexical Analysis: Identifies tokens in a textual input.
- Extended Support: Handles operators, identifiers, and integers.
- Extensible: Tokens can be modified or extended to fit other requirements.
1. Clone the Repository

   ```bash
   git clone https://github.com/your_username/lexical-analyzer.git
   cd lexical-analyzer
   ```

2. Compile the Lex File

   ```bash
   flex lexer.l
   gcc lex.yy.c -o lexical-analyzer -lfl
   ```

3. Run the Lexical Analyzer

   ```bash
   ./lexical-analyzer < input.txt
   ```
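
As a quick sanity check, an `input.txt` along the following lines exercises most of the token classes from the table. The snippet is purely illustrative and is not taken from the repository; the exact grammar your input must follow may differ.

```text
read x;
repeat
    x := x - 1
until x < 1;
if x = 0 then
    write (y_1 + 2) * 3
else
    write 42
end
```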
This project is open-source and available under the MIT License.