90 keywords word count restriction #96

Merged · 4 commits · Dec 25, 2024

Changes from all commits
31 changes: 24 additions & 7 deletions src/lib/server/llm.ts
@@ -179,6 +179,7 @@ export async function chatWithLLMByDocs(
 	history: LLMChatMessage[],
 	task: string,
 	subtasks: string[],
+	subtaskCompleted: boolean[],
 	resources: Resource[],
 	temperature = 0.7
 ): Promise<{
@@ -202,8 +203,12 @@
 		})
 		.join('\n\n');

+	const formattedSubtasks = subtasks.map((subtask, index) => {
+		return subtaskCompleted[index] ? `(完成)${subtask}` : `(未完成)${subtask}`;
+	});
+
 	const system_prompt = DOCS_CONTEXT_SYSTEM_PROMPT.replace('{task}', task)
-		.replace('{subtasks}', subtasks.join('\n'))
+		.replace('{subtasks}', formattedSubtasks.join('\n'))
 		.replace('{resources}', formatted_docs);

 	const [response, subtask_completed, moderation, off_topic] = await Promise.all([
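The `formattedSubtasks` mapping added above prefixes each subtask with its completion status before splicing the list into the system prompt. A minimal standalone sketch of the transformation, with English markers standing in for the PR's Chinese `(完成)`/`(未完成)` labels and hypothetical sample data:

```typescript
// Sketch of the completion-marker mapping in chatWithLLMByDocs.
// Sample data is hypothetical; the real values come from the conversation record.
const subtasks: string[] = ['Define photosynthesis', 'Give a real-world example'];
const subtaskCompleted: boolean[] = [true, false];

const formattedSubtasks = subtasks.map((subtask, index) =>
	subtaskCompleted[index] ? `(completed) ${subtask}` : `(incomplete) ${subtask}`
);

console.log(formattedSubtasks.join('\n'));
// (completed) Define photosynthesis
// (incomplete) Give a real-world example
```

Note that the two arrays are kept parallel by index, which is why the route handlers below must pass a `subtaskCompleted` array of exactly `subtasks.length` entries.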
@@ -394,9 +399,12 @@ export async function summarizeConcepts(
 	}
 }

-export async function summarizeGroupOpinions(
-	student_opinion: StudentSpeak[]
-): Promise<{ success: boolean; summary: string; keywords: string[]; error?: string }> {
+export async function summarizeGroupOpinions(student_opinion: StudentSpeak[]): Promise<{
+	success: boolean;
+	summary: string;
+	keywords: Record<string, number>;
+	error?: string;
+}> {
 	try {
 		const formatted_opinions = student_opinion
 			.filter((opinion) => opinion.role !== '摘要小幫手')
@@ -409,7 +417,12 @@
 		);
 		const summary_group_opinion_schema = z.object({
 			group_summary: z.string(),
-			group_key_points: z.array(z.string())
+			group_keywords: z.array(
+				z.object({
+					keyword: z.string(),
+					strength: z.number()
+				})
+			)
 		});

 		const response = await requestZodLLM(system_prompt, summary_group_opinion_schema);
@@ -419,18 +432,22 @@
 		}

 		const message = response.message as z.infer<typeof summary_group_opinion_schema>;
+		const formatted_keywords = message.group_keywords.reduce(
+			(acc, keyword) => ({ ...acc, [keyword.keyword]: keyword.strength }),
+			{} as Record<string, number>
+		);

 		return {
 			success: true,
 			summary: message.group_summary,
-			keywords: message.group_key_points
+			keywords: formatted_keywords
 		};
 	} catch (error) {
 		console.error('Error in summarizeGroupOpinions:', error);
 		return {
 			success: false,
 			summary: '',
-			keywords: [],
+			keywords: {},
 			error: 'Failed to summarize group opinions'
 		};
 	}
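The `keywords` return type changes in this file from `string[]` to `Record<string, number>`: the model now emits `{ keyword, strength }` pairs and a `reduce` folds them into a map. A standalone sketch of that fold, with hypothetical sample data:

```typescript
// Fold an array of { keyword, strength } pairs into a Record<string, number>,
// mirroring the formatted_keywords reduce in summarizeGroupOpinions.
type GroupKeyword = { keyword: string; strength: number };

const group_keywords: GroupKeyword[] = [
	{ keyword: 'recycling', strength: 5 },
	{ keyword: 'policy', strength: 2 }
];

const formatted_keywords = group_keywords.reduce(
	(acc, keyword) => ({ ...acc, [keyword.keyword]: keyword.strength }),
	{} as Record<string, number>
);

console.log(formatted_keywords); // { recycling: 5, policy: 2 }
```

As a design note, `Object.fromEntries(group_keywords.map((k) => [k.keyword, k.strength]))` produces the same record in a single pass, avoiding the repeated object spread inside the reducer.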
8 changes: 6 additions & 2 deletions src/lib/server/prompt.ts
@@ -5,8 +5,11 @@ export const DOCS_CONTEXT_SYSTEM_PROMPT = `
 3. Guide the student toward the subtasks based on the provided documents
 4. Ensure responses stay consistent and accurate with the source material
 5. Do not give answers directly or subjectively; guide the student to think for themselves
-6. If the student answers incorrectly, guide them to find the cause of the error and correct it
-7. Never reject the student's answer, but do not stray from the goals the documents aim for
+6. Do not state the documents' content directly, or say that you are referencing the documents
+7. If the student answers incorrectly, guide them to find the cause of the error and correct it
+8. Never reject the student's answer, but do not stray from the goals the documents aim for
+9. When guiding, focus first on the subtasks that are still incomplete
+10. Once all subtasks are completed, keep the conversation going to deepen the student's understanding of the main question

 Main question:
 {task}
@@ -58,6 +61,7 @@ export const CONCEPT_SUMMARY_PROMPT = `

 export const GROUP_OPINION_SUMMARY_PROMPT = `
 Please summarize the following students' group viewpoints, ideas, and conclusions:
+Provide the keywords as single words, and rate each word's importance in the discussion from 1 to 5; the higher the number, the more important the word.

 {groupOpinions}

@@ -25,7 +25,13 @@ export const GET: RequestHandler = async ({ params, locals }) => {
 	const group_ref = getGroupRef(id, group_number);
 	const group_data = await getGroupData(group_ref);

-	const intro = await chatWithLLMByDocs([], task, subtasks, resources);
+	const intro = await chatWithLLMByDocs(
+		[],
+		task,
+		subtasks,
+		new Array(subtasks.length).fill(false),
+		resources
+	);
 	if (!intro) {
 		throw error(500, 'Error generating intro message');
 	}
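Since a brand-new conversation has no progress yet, this route seeds the new `subtaskCompleted` parameter with an all-`false` array sized to the subtask list. A tiny sketch of that seeding, with hypothetical subtasks:

```typescript
// Every subtask starts incomplete when the intro message is generated.
const subtasks = ['Summarize the article', 'List two counterarguments'];
const subtaskCompleted: boolean[] = new Array(subtasks.length).fill(false);

console.log(subtaskCompleted); // [ false, false ]
```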
@@ -26,7 +26,7 @@ export const POST: RequestHandler = async ({ request, params, locals }) => {
 		throw error(400, 'Missing parameters');
 	}

-	const conversation_ref = await getConversationRef(id, group_number, conv_id);
+	const conversation_ref = getConversationRef(id, group_number, conv_id);
 	console.log('Retrieved conversation reference');
 	const { userId, task, subtasks, resources, history, warning, subtaskCompleted } =
 		await getConversationData(conversation_ref);
@@ -58,6 +58,7 @@ export const POST: RequestHandler = async ({ request, params, locals }) => {
 		[...chat_history, { role: 'user', content: content }],
 		task,
 		subtasks,
+		subtaskCompleted,
 		resources
 	);
 	console.log('Received LLM response', {