Is it possible for the post-lexer to consume two tokens and then yield?
I want to have a post-lexer that essentially looks like this (with more code inserted in between):
class PostLexer:
    def __init__(self):
        self.always_accept = ()

    def process(self, stream):
        for tok in stream:
            tok2 = next(stream)
            yield tok
            yield tok2
As this code is written, the parser doesn't work correctly: I get "No terminal defined for ..." errors that go away if I swap the `tok2 = next(stream)` line with the `yield tok` line.
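For reference, at the pure-generator level (no Lark involved) the two orderings emit exactly the same token sequence; the sketch below, with invented names, demonstrates that the "consume two, then yield" pattern preserves order, so the errors must come from *when* tokens are pulled from the stream, not from the tokens themselves.

```python
def pairing_process(stream):
    """Consume two tokens, then yield both (the ordering from the question)."""
    for tok in stream:
        try:
            tok2 = next(stream)
        except StopIteration:
            # Odd number of tokens: yield the last one on its own.
            yield tok
            return
        yield tok
        yield tok2

# The output order is identical to the input order.
print(list(pairing_process(iter(["a", "b", "c", "d"]))))  # ['a', 'b', 'c', 'd']
```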
Issue Analytics
- State:
- Created 4 years ago
- Comments: 5 (5 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I assume you’re using LALR.
LALR by default uses the contextual lexer, which depends on the state of the parser to tokenize. Changing the order means it tries to determine a terminal before the parser has advanced to the next state.
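Concretely, the ordering that works with the contextual lexer yields each token before pulling the next one from the stream, so the parser can advance and establish the state the lexer needs. A sketch of that variant (class name invented for illustration):

```python
class PairingPostLexer:
    always_accept = ()

    def process(self, stream):
        for tok in stream:
            yield tok  # let the parser consume this token and advance first
            try:
                # Only now pull the next token: the contextual lexer can use
                # the parser's updated state to decide which terminals apply.
                yield next(stream)
            except StopIteration:
                return
```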
If you have to do it this way, for whatever reason, you can try to use lexer="standard". It will revert to the traditional YACC/PLY lexer, which doesn't care about the parser state. However, that means you're losing a bit of parsing power, and might experience more collisions.

Yes, I guess that's what I'm going to do then. Thanks.