Frequently asked questions

AI-related questions

 

Q: What type of data does the AI need to be trained / calibrated?

This depends on the features used:

  • Smart Check: no data is needed.
  • Playbooks: questions and some answer configuration. Instruction details (specific aspects the AI should consider) lead to better results; see the sketch after this list.
  • Formal Check: clauses and assessments.
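For illustration, a playbook entry of this kind could be described as follows. This is a minimal sketch; the field names are hypothetical and do not reflect the product's actual configuration schema:

```python
# Hypothetical sketch of a playbook entry; the field names are illustrative
# assumptions, not the product's actual configuration format.
playbook_question = {
    "question": "Does the contract contain a limitation-of-liability clause?",
    "answer_options": ["Yes", "No", "Unclear"],  # minor answer configuration
    "instructions": (
        "Consider caps expressed as a multiple of the contract value "
        "and any carve-outs for gross negligence."  # aspects to be considered
    ),
}
```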

Please also see the answer to “Can I train the model further with my feedback on the outcomes?”

 

Q: Can I train the model further with my feedback on the outcomes?

You provide valuable information to the system with every assessment you make, regardless of the feature you are using. Whether you adopt an AI answer or answer yourself in a Playbook, click "thumbs up" or "thumbs down" in a Smart Check, or accept or decline a text in a Clause Check, your assessments are stored, post-processed, and made available for future evaluations within your tenant.
Important note: for data privacy reasons, the LLMs currently in use do not train on your documents.
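As an illustration of what such a stored assessment could look like, here is a minimal sketch. The structure and field names are assumptions for clarity, not Contract Insights' actual data model:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical shape of a stored feedback event; the fields are illustrative
# assumptions, not the product's actual data model.
@dataclass
class FeedbackEvent:
    tenant_id: str            # feedback stays within your tenant
    feature: str              # e.g. "smart_check", "playbook", "clause_check"
    ai_answer: str            # what the AI proposed
    user_verdict: str         # e.g. "thumbs_up", "thumbs_down", "accepted", "declined"
    user_answer: str | None   # your own answer, if you answered yourself
    created_at: datetime

event = FeedbackEvent(
    tenant_id="tenant-42",
    feature="smart_check",
    ai_answer="Clause 7 caps liability at 12 months of fees.",
    user_verdict="thumbs_up",
    user_answer=None,
    created_at=datetime.now(),
)
```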

 

Q: What is the average hallucination ratio?

There is no simple answer to this question, because the application offers several AI functions and each module uses the underlying models differently.

The Smart Check and the Playbook Check use LLM technologies to analyze the content of the uploaded documents. At these points there is a low risk of hallucination:

  • While answering the questions of a playbook
  • While generating the "Adjustment proposal" in a Smart Check

 

Q: Why do I get different results from the [Smart Check | Playbook Check]?

Large language models predict text probabilistically, so even with the same document the output can vary. Changing settings such as the language can also lead to different numbers of issues being flagged.
This variability is inherent to how large language models work. We are continuously fine-tuning parameters to improve consistency, but some degree of variation remains part of the model's nature and is an ongoing challenge when using LLMs for tasks like legal analysis.
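You can observe this effect outside the application by calling any LLM API twice with identical input; with a non-zero sampling temperature the answers will usually differ. A minimal sketch using the OpenAI Python client (the model name and prompt are placeholders, not the product's configuration):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "List the three biggest risks of an unlimited liability clause."

# Two identical requests: with a non-zero temperature the model samples from
# its predicted probability distribution, so the answers will usually differ.
for run in (1, 2):
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    print(f"--- run {run} ---")
    print(response.choices[0].message.content)
```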

 

Q: Which LLM does Contract Insights use?

We use Amazon Titan V3 (on AWS) to generate embeddings.

For prompt execution, we run OpenAI's GPT-4o (on Microsoft Azure).

 

Anthropic Claude and Mistral are currently under evaluation.
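For illustration, the split described above between embedding generation and prompt execution could look like the following minimal sketch. The model IDs, region, and endpoint are placeholders, and this is not the product's actual code:

```python
import json

import boto3
from openai import AzureOpenAI

# Embeddings via Amazon Titan on AWS Bedrock (model ID and region are placeholders).
bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")

def embed(text: str) -> list[float]:
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

# Prompt execution via GPT-4o on Microsoft Azure (endpoint is a placeholder;
# the API key is read from the AZURE_OPENAI_API_KEY environment variable).
azure = AzureOpenAI(
    api_version="2024-06-01",
    azure_endpoint="https://example.openai.azure.com",
)

def ask(prompt: str) -> str:
    response = azure.chat.completions.create(
        model="gpt-4o",  # Azure deployment name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```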

 

Q: Can I bring my own LLM?

This feature is currently under development.

 


Usage-related questions

 

Q: Smart Check: How can I indicate (or can the system detect by itself) whether we are on the buyer or the seller side (in an M&A, for example), given that the risk should be evaluated differently?

When you upload a document for a Smart Check, the pre-analysis takes the first page of your document and tries to identify the two parties. The results appear in the "Own party" and "Other parties" fields, where you can swap them or type in your own values.

 

The Smart Check is instructed to evaluate the document in favor of the "Own party" and to look for risks primarily from that party's point of view.

If the pre-analysis is mistaken, or the first page does not identify the sides, simply enter the party you represent in the "Own party" field.
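For illustration, the kind of extraction the pre-analysis performs could be expressed as a prompt like the one below. This is a hedged sketch, not the product's actual pre-analysis prompt:

```python
# Illustrative sketch of a party-extraction prompt; not the product's actual
# pre-analysis implementation.
FIRST_PAGE = "...text of the contract's first page..."

PARTY_PROMPT = f"""
Read the first page of a contract below and identify the contracting parties.
Return JSON with two fields: "own_party" and "other_parties".
If the parties cannot be determined, return empty strings.

First page:
{FIRST_PAGE}
"""
```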