[Security AI Assistant] Bedrock prompt updates #213160
Summary
When given a complex prompt that called for multiple tool uses with Bedrock selected, the assistant would return a partial response: https://smith.langchain.com/public/2ba23ac9-fd60-4eb4-ad3b-e7cce96b53a9/r
I noticed this was happening when multiple tool outputs included formatted steps. I added an instruction to the system prompt:

> Ensure that the final response always includes all instructions from the tool responses. Never omit earlier parts of the response.

This seems to have improved the response: https://smith.langchain.com/public/9756d4c9-a0d0-4558-a613-331bea8974d0/r

I ran the ES|QL regression suite for Sonnet 3.5 and Sonnet 3.7. Correctness for 3.5 remained at 94%, while 3.7 went from 90% to 100%. 😮
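For reference, a minimal sketch of how an instruction like this can be appended to a system prompt. The constant and function names here are illustrative assumptions, not the actual identifiers in the Kibana codebase:

```ts
// Illustrative sketch only; names are assumptions, not Kibana's actual code.
const TOOL_RESPONSE_INSTRUCTION =
  'Ensure that the final response always includes all instructions from the tool ' +
  'responses. Never omit earlier parts of the response.';

// `basePrompt` stands in for the existing Bedrock system prompt.
const getBedrockSystemPrompt = (basePrompt: string): string =>
  `${basePrompt}\n\n${TOOL_RESPONSE_INSTRUCTION}`;
```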
Additionally, I noticed that the generated title was extremely long and was actually answering the user's question: https://smith.langchain.com/public/5483cda3-10fa-4388-9c53-666ef27ac43f/r
I rewrote the title prompt entirely because small changes were not fixing the issue. Claude tends to follow clear, structured instructions well but can sometimes try to be "helpful" by answering anyway. The refined version enforces compliance through:

- **Strong prohibitive language** — explicitly forbidding answers
- **Failure consequences** — stating that any extra output is a failure
- **Step-by-step clarity** — breaking down the process
- **Removing loopholes** — ensuring no additional text is allowed

These changes eliminate ambiguity and force Claude to follow the instructions strictly. This seems to have resolved the issue: https://smith.langchain.com/public/60b2028c-a1b8-4ed9-886a-e319645448fd/r
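To illustrate those four techniques, here is a hypothetical title prompt in the same style; this is not the exact prompt text shipped in this PR:

```ts
// Hypothetical prompt illustrating the four techniques described above;
// not the actual prompt from this PR.
const TITLE_PROMPT = [
  // Step-by-step clarity: one narrow task broken into explicit steps.
  'Your ONLY task is to generate a short title (10 words or fewer) for the conversation below.',
  '1. Read the conversation.',
  '2. Identify its main topic.',
  '3. Respond with the title and nothing else.',
  // Strong prohibitive language: explicitly forbid answering.
  "You are FORBIDDEN from answering the user's question.",
  // Failure consequences: any extra output counts as a failure.
  'Producing any text other than the title is a failure.',
  // Removing loopholes: no additional text of any kind is allowed.
  'Do not add explanations, preambles, quotation marks, or any other text.',
].join('\n');
```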