**docs/advanced/chat_functions/info.md** (40 additions, 21 deletions)
# Chat Functions - More information

Chat functions are the microservices that Lambda Feedback calls to provide the underlying functionality of a chatbot. Each chat function connects to a [Large Language Model (LLM)](https://uit.stanford.edu/service/techtraining/ai-demystified/llm) configured with its own role prompt, giving each chatbot its own personality and approach to assisting students.

For a higher-level description of the chatbots currently available and the context each one is given about the student and question, see the [Chatbots guide for teachers](../../teacher/guides/chatbots.md) (or the [student-facing version](../../student/chatbots.md)).

## Deployed chatbots
<!-- TODO: This information needs to be autoloaded in a similar manner to evaluation function documentation -->

The role prompts and key behaviours of the chatbots currently deployed on Lambda Feedback are described below.

### Informational Chatbot

**Role:** A patient AI tutor focused on student-centred learning — aims to foster critical thinking, active engagement, and confidence-building.

**Key behaviours from the role prompt**

- **Step-by-step guidance:** breaks problems into smaller steps and offers hints or intermediate steps before giving the final answer. Will share the complete answer if the student explicitly asks, but only after first encouraging exploration.
- **Error reflection:** treats mistakes as opportunities — helps the student work out *why* something went wrong rather than silently correcting it.
- **Awareness of materials:** grounds responses in the question's content, answer, worked solution, and the teacher's guidance — paraphrasing rather than quoting verbatim.
- **Adaptive support:** if the student keeps struggling, evaluates their progress and time spent on the question and gradually offers more detailed and specific guidance.
- **Engagement:** ends interactions with a question to keep the dialogue going and gauge comprehension.
- **Praise** is reserved for genuine effort or breakthroughs to avoid sounding insincere.
- **Stays on topic:** politely redirects students who ask about unrelated material.

[More details and source →](https://github.com/lambda-feedback/informationalChatFunction)

### Concise Chatbot

**Role:** A tutor that gives short, direct answers.

**Key behaviours from the role prompt**

- **Direct and minimal:** answers the question and stops — no extra details, explanations, or examples unless the student asks.
- **Aware of struggle:** if the student seems stuck or frustrated, references their progress so far and how long they have spent on the question relative to the teacher's guidance time.
- **Stays on topic:** redirects unrelated questions back to the current material with a short refusal.
- **No filler:** does not end messages with concluding statements or summaries.

[More details and source →](https://github.com/lambda-feedback/conciseChatFunction)

### Reflective Chatbot

**Role:** A Socratic tutor that guides students to discover knowledge through questioning rather than direct instruction.

**Key behaviours from the role prompt**

- **Always ends with a question:** every response finishes with a follow-up question that pushes the student's thinking forward.
- **Counter-questions over answers:** when a student asks a direct question, responds with a question that guides them toward the answer rather than handing it over. If it does share a fact, it immediately follows with a question that asks the student to apply or extend it.
- **Uses a varied question toolkit:** clarifying ("What do you mean by…?"), assumption-probing ("What are you assuming here?"), evidence-based, perspective, implication, and meta-questions about why a question matters.
- **Diagnoses where the student is stuck:** if a student is frustrated, asks about their thought process to locate the gap, drawing on their progress and time spent.
- **Never provides complete answers:** always leaves room for the student to think and respond.

[More details and source →](https://github.com/lambda-feedback/reflectiveChatFunction)

## Chat Function Development

Are you interested in developing your own chatbot? Check out the [Quickstart guide](quickstart.md) to develop and deploy your own AI chat function for Lambda Feedback.
**docs/advanced/chat_functions/local.md** (110 additions, 19 deletions)
# Running and Testing a Chat Function Locally

## Run Unit Tests

You can run the unit tests using `pytest`:

```bash
pytest
```

## Run the Chat Script

You can use the `manual_agent_run.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic chats:

```bash
python tests/manual_agent_run.py
```

## Testing using the Docker Image [:material-docker:](https://www.docker.com/)
```bash
curl --location 'http://localhost:8080/2015-03-31/functions/function/invocations' \
--header 'Content-Type: application/json' \
--data '{"body":"{\"conversationId\": \"12345Test\", \"messages\": [{\"role\": \"USER\", \"content\": \"hi\"}], \"user\": {\"type\": \"LEARNER\"}}"}'
```
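The `--data` payload above nests the request as a JSON string inside the outer `body` field, so it is serialised twice. As a sketch, the same payload can be built in Python (the values are the same illustrative ones used in the curl call):

```python
import json

# Inner request in the documented shape (illustrative values).
inner_body = {
    "conversationId": "12345Test",
    "messages": [{"role": "USER", "content": "hi"}],
    "user": {"type": "LEARNER"},
}

# The local Lambda invocation endpoint expects the inner request as a
# JSON *string* inside the outer "body" field, hence the double encoding.
payload = {"body": json.dumps(inner_body)}
print(json.dumps(payload))
```

Decoding `payload["body"]` recovers the original request, which is what the function receives.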

### Call Docker Container

#### A. Call Docker with Python Requests

In the `tests/` folder you can find the `manual_agent_requests.py` script that calls the POST URL of the running Docker container. It reads input files matching the expected schema, so you can use it to validate your chatbot end-to-end.

```bash
python tests/manual_agent_requests.py
```

#### B. Call Docker Container From Postman

POST URL:

```bash
http://localhost:8080/2015-03-31/functions/function/invocations
```

Body (stringified within `body` for the API request):

```JSON
{"body":"{\"conversationId\": \"12345Test\", \"messages\": [{\"role\": \"USER\", \"content\": \"hi\"}], \"user\": {\"type\": \"LEARNER\"}}"}
```

Input Body with optional fields:

```json
{
  "conversationId": "<uuid>",
  "messages": [
    { "role": "USER", "content": "<previous user message>" },
    { "role": "ASSISTANT", "content": "<previous assistant reply>" },
    { "role": "USER", "content": "<current message>" }
  ],
  "user": {
    "type": "LEARNER",
    "preference": {
      "conversationalStyle": "<stored style string>"
    },
    "taskProgress": {
      "timeSpentOnQuestion": "30 minutes",
      "accessStatus": "a good amount of time spent on this question today.",
      "markedDone": "This question is still being worked on.",
      "currentPart": {
        "position": 0,
        "timeSpentOnPart": "10 minutes",
        "markedDone": "This part is not marked done.",
        "responseAreas": [
          {
            "responseType": "EXPRESSION",
            "totalSubmissions": 3,
            "wrongSubmissions": 2,
            "latestSubmission": {
              "submission": "<student's last answer>",
              "feedback": "<feedback text from evaluator>",
              "answer": "<reference answer used for evaluation>"
            }
          }
        ]
      }
    }
  },
  "context": {
    "summary": "<compressed chat history>",
    "set": {
      "title": "Fundamentals",
      "number": 2,
      "description": "<set description>"
    },
    "question": {
      "title": "Understanding Polymorphism",
      "number": 3,
      "guidance": "<teacher guidance>",
      "content": "<master question content>",
      "estimatedTime": "15-25 minutes",
      "parts": [
        {
          "position": 0,
          "content": "<part prompt>",
          "answerContent": "<part answer>",
          "workedSolutionSections": [
            { "position": 0, "title": "Step 1", "content": "..." }
          ],
          "structuredTutorialSections": [
            { "position": 0, "title": "Hint", "content": "..." }
          ],
          "responseAreas": [
            {
              "position": 0,
              "responseType": "EXPRESSION",
              "answer": "<reference answer>",
              "preResponseText": "<label shown before input>"
            }
          ]
        }
      ]
    }
  }
}
```

Output Response:

```json
{
  "output": {
    "role": "ASSISTANT",
    "content": "<assistant reply text>"
  },
  "metadata": {
    "summary": "<updated chat summary>",
    "conversationalStyle": "<updated style string>",
    "processingTimeMs": 1234
  }
}
```
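A short sketch of consuming a response in this shape (the payload here is illustrative, not produced by a real call):

```python
import json

# Illustrative response payload in the documented shape.
raw = json.dumps({
    "output": {"role": "ASSISTANT", "content": "Hello! How can I help?"},
    "metadata": {
        "summary": "greeting exchanged",
        "conversationalStyle": "friendly",
        "processingTimeMs": 1234,
    },
})

response = json.loads(raw)
reply = response["output"]["content"]                  # the assistant's message text
elapsed_ms = response["metadata"]["processingTimeMs"]  # server-side processing time
print(reply, elapsed_ms)
```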
**docs/advanced/chat_functions/quickstart.md** (89 additions, 26 deletions)

- The chat function expects the following arguments when it is called:

Body with necessary fields:

```JSON
{
  "conversationId": "12345Test",
  "messages": [{ "role": "USER", "content": "hi" }],
  "user": { "type": "LEARNER" }
}
```
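As a sketch, the required fields above can be checked with a small helper (the function name and error messages are illustrative, not part of the chat-function API):

```python
def validate_request(body: dict) -> list:
    """Return a list of problems with a chat-function request body (empty if valid)."""
    problems = []
    if not body.get("conversationId"):
        problems.append("missing conversationId")
    messages = body.get("messages")
    if not isinstance(messages, list) or not messages:
        problems.append("messages must be a non-empty list")
    else:
        for i, msg in enumerate(messages):
            if msg.get("role") not in {"USER", "ASSISTANT"}:
                problems.append(f"messages[{i}] has an unknown role")
            if "content" not in msg:
                problems.append(f"messages[{i}] is missing content")
    if not body.get("user", {}).get("type"):
        problems.append("missing user.type")
    return problems

# The minimal body from the example above passes validation.
example = {
    "conversationId": "12345Test",
    "messages": [{"role": "USER", "content": "hi"}],
    "user": {"type": "LEARNER"},
}
print(validate_request(example))  # → []
```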

Body with optional fields:

```JSON
{
  "conversationId": "12345Test",
  "messages": [
    { "role": "USER", "content": "<previous user message>" },
    { "role": "ASSISTANT", "content": "<previous assistant reply>" },
    { "role": "USER", "content": "hi" }
  ],
  "user": {
    "type": "LEARNER",
    "preference": { "conversationalStyle": "<stored style string>" },
    "taskProgress": {
      "timeSpentOnQuestion": "30 minutes",
      "accessStatus": "a good amount of time spent on this question today.",
      "markedDone": "This question is still being worked on.",
      "currentPart": {
        "position": 0,
        "timeSpentOnPart": "10 minutes",
        "markedDone": "This part is not marked done.",
        "responseAreas": [
          {
            "responseType": "EXPRESSION",
            "totalSubmissions": 3,
            "wrongSubmissions": 2,
            "latestSubmission": {
              "submission": "<student's last answer>",
              "feedback": "<feedback text from evaluator>",
              "answer": "<reference answer used for evaluation>"
            }
          }
        ]
      }
    }
  },
  "context": {
    "summary": "<compressed chat history>",
    "set": { "title": "Fundamentals", "number": 2, "description": "<set description>" },
    "question": {
      "title": "Understanding Polymorphism",
      "number": 3,
      "guidance": "<teacher guidance>",
      "content": "<master question content>",
      "estimatedTime": "15-25 minutes",
      "parts": [
        {
          "position": 0,
          "content": "<part prompt>",
          "answerContent": "<part answer>",
          "workedSolutionSections": [
            { "position": 0, "title": "Step 1", "content": "..." }
          ],
          "structuredTutorialSections": [
            { "position": 0, "title": "Hint", "content": "..." }
          ],
          "responseAreas": [
            {
              "position": 0,
              "responseType": "EXPRESSION",
              "answer": "<reference answer>",
              "preResponseText": "<label shown before input>"
            }
          ]
        }
      ]
    }
  }
}
```

Expected response:

```JSON
{
  "output": { "role": "ASSISTANT", "content": "<assistant reply text>" },
  "metadata": {
    "summary": "<updated chat summary>",
    "conversationalStyle": "<updated style string>",
    "processingTimeMs": 1234
  }
}
```

4. Changes can be tested locally by running the pipeline tests using:
```bash
pytest
```
[Running and Testing Chat Functions Locally](local.md){ .md-button }

```bash
curl --location 'https://<***>.execute-api.eu-west-2.amazonaws.com/default/chat/chatFunctionBoilerplate-dev' \
--header 'Content-Type: application/json' \
--data '{
  "conversationId": "12345Test",
  "messages": [
    { "role": "USER", "content": "hi" }
  ],
  "user": { "type": "LEARNER" }
}'
```

7. Once the `dev` chat function is fully tested, you can merge the code into the default branch (`main`). This triggers the `main.yml` workflow, which deploys the `staging` and `prod` versions of your chat function. Please contact the ADMIN to obtain the URLs for the `staging` and `prod` versions of your chat function.
Expand Down
Loading
Loading