Commit: new prompt topic

fscelliott committed Nov 8, 2024
1 parent b5115b9 commit a706a7b
Showing 5 changed files with 70 additions and 165 deletions.
@@ -58,25 +58,21 @@ Parameters



-TODO: add categories like in the draft prompt topic



-| key | value | description | interactions |
-| :--------------------- | :---------------------- | :----------------------------------------------------------- | ------------ |
-| id (**required**) | `queryGroup` | | |
-| queries | array of objects | An array of query objects, where each extracts a single fact and outputs a single field. Each query contains the following parameters:<br/>`id` (**required**) - The ID for the extracted field. <br/>`description` (**required**) - A free-text question about information in the document. For example, `"what's the policy period?"` or `"what's the client's first and last name?"`. For more information about how to write questions (or "prompts"), see [Query Group](https://docs.sensible.so/docs/query-group-tips) extraction tips. | |
-| chunkScoringText | string | Use this parameter to narrow down the page location of the answer to your prompt. For details about context and chunks, see the Notes section.<br/>A representative snippet of text from the part of the document where you expect to find the answer to your prompt. For example, if your prompt has multiple candidate answers, and the correct answer is located near unique or distinctive text that's difficult to incorporate into your question, then specify the distinctive text in this parameter.<br/>If specified, Sensible uses this text to score chunks' relevancy. If unspecified, Sensible uses the prompt to score chunks.<br/>Sensible recommends that the snippet is specific to the target chunk, semantically similar to the chunk, and structurally similar to the chunk. <br/>For example, if the chunk contains a street address formatted with newlines, then provide a snippet with an example street address that contains newlines, like `123 Main Street\nLondon, England`. If the chunk contains a street address in a free-text paragraph, then provide an unformatted street address in the snippet. | |
-| multimodalEngine | object | Configure this parameter to:<br/>- Extract data from images embedded in a document, for example, photos, charts, or illustrations.<br/>- Troubleshoot extracting from complex text layouts, such as overlapping lines, lines between lines, and handwriting. For example, use this as an alternative to the [Signature](doc:signature) method, the [Nearest Checkbox](doc:nearest-checkbox) method, the [OCR engine](doc:ocr-engine), and line [preprocessors](doc:preprocessors).<br/><br/>This parameter sends an image of the document region containing the target data to a multimodal LLM (GPT-4 Vision Preview), so that you can ask questions about text and non-text images. This bypasses Sensible's [OCR](doc:ocr) and direct-text extraction processes for the region. Note that this option doesn't support confidence signals.<br/>This parameter has the following parameters:<br/><br/>`region`: The document region to send as an image to the multimodal LLM. Configurable with the following options:<br/><br/>- To automatically select the [context](doc:query-group#notes) as the region, specify `"region": "automatic"`. If you configure this option for a non-text image, then help Sensible locate the context by including queries in the group that target text near the image, or by specifying the nearby text in the Chunk Scoring Text parameter. <br/><br/>- To manually specify a region, specify an [anchor](doc:anchor) close to the region you want to capture. Specify the region's dimensions in inches relative to the anchor using the [Region](doc:region) method's parameters, for example:<br/>`"region": { `<br/> `"start": "below",`<br/> `"width": 8,`<br/> `"height": 1.2,`<br/> `"offsetX": -2.5,`<br/> `"offsetY": -0.25`<br/> `}` | |
-| llmEngine | object | Where applicable, configures the LLM engine Sensible uses to answer your prompts. <br/>Configure this parameter to troubleshoot situations in which Sensible correctly identifies the part of the document that contains the answers to your prompts, but the LLM's answer contains problems. For example, Sensible returns an LLM error because the answer isn't properly formatted, or the LLM doesn't follow instructions in your prompt.<br/><br/>Contains the following parameters:<br/>`provider`: <br/>- If set to `open-ai` (default), Sensible uses GPT-4o mini where not hard coded. See the Notes section for more information. <br/> - If set to `anthropic`, Sensible uses Claude 3 Haiku where not hard coded. See the Notes section for more information. | |
-| searchBySummarization | boolean. default: false | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |
-| confidenceSignals | | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt). | |
-| contextDescription | | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |
-| pageHinting | | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |
-| chunkCount | integer. default: 5 | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |
-| chunkSize | integer. default: 0.5 | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |
-| chunkOverlapPercentage | integer. default: 0.5 | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |
-| pageRange | | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |
+| key | value | description | interactions |
+| :--------------------- | :--------------- | :----------------------------------------------------------- | ------------------------------------------------------------ |
+| id (**required**) | `queryGroup` | | |
+| queries | array of objects | An array of query objects, where each extracts a single fact and outputs a single field. Each query contains the following parameters:<br/>`id` (**required**) - The ID for the extracted field. <br/>`description` (**required**) - A free-text question about information in the document. For example, `"what's the policy period?"` or `"what's the client's first and last name?"`. For more information about how to write questions (or "prompts"), see [Query Group](https://docs.sensible.so/docs/query-group-tips) extraction tips. | |
+| chunkScoringText | string | Use this parameter to narrow down the page location of the answer to your prompt. For details about context and chunks, see the Notes section.<br/>A representative snippet of text from the part of the document where you expect to find the answer to your prompt. For example, if your prompt has multiple candidate answers, and the correct answer is located near unique or distinctive text that's difficult to incorporate into your question, then specify the distinctive text in this parameter.<br/>If specified, Sensible uses this text to score chunks' relevancy. If unspecified, Sensible uses the prompt to score chunks.<br/>Sensible recommends that the snippet is specific to the target chunk, semantically similar to the chunk, and structurally similar to the chunk. <br/>For example, if the chunk contains a street address formatted with newlines, then provide a snippet with an example street address that contains newlines, like `123 Main Street\nLondon, England`. If the chunk contains a street address in a free-text paragraph, then provide an unformatted street address in the snippet. | If you set the Search By Summarization parameter to true, Sensible ignores any configured value for this parameter. |
+| multimodalEngine | object | Configure this parameter to:<br/>- Extract data from images embedded in a document, for example, photos, charts, or illustrations.<br/>- Troubleshoot extracting from complex text layouts, such as overlapping lines, lines between lines, and handwriting. For example, use this as an alternative to the [Signature](doc:signature) method, the [Nearest Checkbox](doc:nearest-checkbox) method, the [OCR engine](doc:ocr-engine), and line [preprocessors](doc:preprocessors).<br/><br/>This parameter sends an image of the document region containing the target data to a multimodal LLM (GPT-4 Vision Preview), so that you can ask questions about text and non-text images. This bypasses Sensible's [OCR](doc:ocr) and direct-text extraction processes for the region. <br/>This parameter has the following parameters:<br/><br/>`region`: The document region to send as an image to the multimodal LLM. Configurable with the following options:<br/><br/>- To automatically select the [context](doc:query-group#notes) as the region, specify `"region": "automatic"`. If you configure this option for a non-text image, then help Sensible locate the context by including queries in the group that target text near the image, or by specifying the nearby text in the Chunk Scoring Text parameter. <br/><br/>- To manually specify a region, specify an [anchor](doc:anchor) close to the region you want to capture. Specify the region's dimensions in inches relative to the anchor using the [Region](doc:region) method's parameters, for example:<br/>`"region": { `<br/> `"start": "below",`<br/> `"width": 8,`<br/> `"height": 1.2,`<br/> `"offsetX": -2.5,`<br/> `"offsetY": -0.25`<br/> `}` | If you configure this parameter, Sensible doesn't support confidence signals for the multimodal output. |
+| llmEngine | object | Where applicable, configures the LLM engine Sensible uses to answer your prompts. <br/>Configure this parameter to troubleshoot situations in which Sensible correctly identifies the part of the document that contains the answers to your prompts, but the LLM's answer contains problems. For example, Sensible returns an LLM error because the answer isn't properly formatted, or the LLM doesn't follow instructions in your prompt.<br/><br/>Contains the following parameters:<br/>`provider`: <br/>- If set to `open-ai` (default), Sensible uses GPT-4o mini where not hard coded. See the Notes section for more information. <br/> - If set to `anthropic`, Sensible uses Claude 3 Haiku where not hard coded. See the Notes section for more information. | |
+| searchBySummarization | | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |
+| confidenceSignals | | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt). | |
+| contextDescription | | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |
+| pageHinting | | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |
+| chunkCount | | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |
+| chunkSize | | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |
+| chunkOverlapPercentage | | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |
+| pageRange | | For information about this parameter, see [Advanced LLM prompt configuration](doc:prompt#parameters). | |

## Examples

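Below is a minimal sketch of a query group that combines the `queries`, `chunkScoringText`, and `llmEngine` parameters from the table above. The field IDs, prompt text, and scoring snippet are illustrative placeholders, and the surrounding `fields` wrapper assumes the standard SenseML template structure:

```json
{
  "fields": [
    {
      "method": {
        "id": "queryGroup",
        "queries": [
          {
            "id": "policy_period",
            "description": "what's the policy period?"
          },
          {
            "id": "client_name",
            "description": "what's the client's first and last name?"
          }
        ],
        "chunkScoringText": "123 Main Street\nLondon, England",
        "llmEngine": {
          "provider": "anthropic"
        }
      }
    }
  ]
}
```

Each query extracts a single fact and returns it as its own field in the extraction output, keyed by the query's `id`.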
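
A similar sketch shows the `multimodalEngine` parameter with the automatic region option, in which Sensible selects the context as the image region. The query is again an illustrative placeholder; as the table notes, Sensible doesn't support confidence signals for the multimodal output:

```json
{
  "fields": [
    {
      "method": {
        "id": "queryGroup",
        "queries": [
          {
            "id": "chart_trend",
            "description": "what trend does the chart illustrate?"
          }
        ],
        "multimodalEngine": {
          "region": "automatic"
        }
      }
    }
  ]
}
```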
