Max chunk size for the final chunk returned from SemanticChunker.split_documents() #28249
Unanswered
akash97715 asked this question in Q&A
Replies: 2 comments
- @dosubot please respond
- @dosu please respond
Checked other resources
Commit to Help
Example Code
Description
I am using the semantic chunking above from LangChain to create chunks. I just want to understand the maximum size of a chunk returned from this function: can it exceed 8k tokens? If yes, then I need to do some post-processing before hitting the Azure embedding endpoint. This length detail is missing from the current documentation.
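For context, the SemanticChunker in langchain_experimental splits on semantic-similarity breakpoints rather than on length, so it does not appear to guarantee any maximum chunk size; a single chunk can exceed an embedding endpoint's input limit. Below is a minimal post-processing sketch, not part of LangChain's API: the `enforce_token_limit` helper, the `MAX_TOKENS` value, and the `cl100k_base` encoding are assumptions on my part (8191 tokens is the documented input limit for Azure OpenAI's text-embedding models).

```python
# Sketch (assumed post-processing, not built into SemanticChunker):
# re-split any chunk that exceeds the embedding endpoint's token limit.
import tiktoken
from langchain_core.documents import Document
from langchain_text_splitters import RecursiveCharacterTextSplitter

MAX_TOKENS = 8191  # assumed Azure OpenAI embedding input limit
encoding = tiktoken.get_encoding("cl100k_base")  # assumed encoding

def num_tokens(text: str) -> int:
    """Count tokens the way the embedding model would."""
    return len(encoding.encode(text))

# Fallback splitter, sized in tokens rather than characters.
fallback_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base",
    chunk_size=MAX_TOKENS,
    chunk_overlap=0,
)

def enforce_token_limit(docs: list[Document]) -> list[Document]:
    """Pass small chunks through; re-split oversized ones."""
    safe_docs: list[Document] = []
    for doc in docs:
        if num_tokens(doc.page_content) <= MAX_TOKENS:
            safe_docs.append(doc)
        else:
            safe_docs.extend(fallback_splitter.split_documents([doc]))
    return safe_docs
```

Passing the output of SemanticChunker.split_documents() through enforce_token_limit keeps every chunk under the endpoint's limit, at the cost of splitting oversized semantic chunks on plain token-length boundaries.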
System Info
LangChain (latest)