feat: BMDB integration, system prompt split, LLM speed-ups, dual-DB UI #66
Draft
- … for each request
- …o backend llm functions
- …input and add another button 'provide history of model changes'
- se, format the output results from the query
- …ntroduce and test handlesendmessage2 for sending queries to BMDB
- …d another one to /search/id
- …ic questions, add the tool to llm processing
- …s; add a stop button
- …bmdb specific actions
- … up llm response time
- …ting tools into subsets and choosing which tools to send to llm based on user prompt; decreasing max result return
- … lists (>10 models)
- …based on its type
- …anging format, this way the llm will stop returning false results
- … answers are registered on the chatpage
- …k to generate the final response
- …d queried even on the biomodel-id-specific page instead of only one
- …B for consistency
- …n the biomodel specific page (AI Analysis) on the chat screen + loads the history in the sidebar
- …functions; start adding the endpoints to the router file
- …out BMDB model to "ask about this specific model"
- …iles and for getting information about a specific model
- …to identifiers.org should be underlined with a link available, and no other elements should have links
Summary
Brings 76 commits from `development` into `main`. Opened as draft — the review findings below surfaced while reading the diff. Major themes:

- **BMDB integration:** new `bmdb_router` / `bmdb_controller` / `bmdb_schema`, new service functions (`fetch_bmdb_models`, `get_xml_file`, `get_bmdb_model_info`), new BMDB tools wired into the LLM, and a parallel BMDB search path on the frontend.
- **System prompt split:** `system_prompt.py` carved into a base `SYSTEM_PROMPT` plus per-DB `BMDB_SYSTEM_PROMPT` / `VCDB_SYSTEM_PROMPT`, composed at runtime based on the selected database.
- **LLM speed-ups:** `should_use_tools()` skips tool round-trips for chitchat; `select_tools_for_prompt()` filters tools into `DB_TOOLS` / `KB_TOOLS` / `PUB_TOOLS` subsets via regex on the user prompt; `asyncio.gather` runs tool calls concurrently; `summarize_tool_result()` truncates large tool outputs; `default_rows` lowered 1000 → 25. Per-stage timing surfaced to the UI as `tool_summary`.
- **Dual-DB UI:** `ChatBox` gains `useVCDB` / `useBMDB` checkboxes, a Stop button (`AbortController`), conditional quick-action button groups, and BMDB-formatted result rendering.
- **Conversations:** `localStorage`-backed conversations with deep-linking via `?conversation=<uuid>`, real entries in the sidebar.
- **Rename:** `app.services.vcelldb_service` → `app.services.databases_service` (all in-tree imports updated).
- **Misc:** `/search/[bmid]`, settings page link updates, Pydantic v2 `SettingsConfigDict(extra="ignore")`.
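For reviewers skimming the diff, the routing described under "LLM speed-ups" presumably has roughly the shape below. Only the function and subset names come from this PR; every regex, signature, and the fallback behavior are illustrative assumptions.

```python
import asyncio
import re

# Illustrative stand-ins: the real DB_TOOLS / KB_TOOLS / PUB_TOOLS patterns
# live in the PR; these regexes are assumptions for the sketch.
TOOL_SUBSET_PATTERNS = {
    "DB_TOOLS": re.compile(r"\b(model|simulation|biomodel|sbml|vcml)\b", re.I),
    "KB_TOOLS": re.compile(r"\b(what is|explain|define|how does)\b", re.I),
    "PUB_TOOLS": re.compile(r"\b(paper|publication|doi|author|cite)\b", re.I),
}

CHITCHAT = re.compile(r"^\s*(hi|hello|hey|thanks|thank you|bye)\b", re.I)

def should_use_tools(prompt: str) -> bool:
    """Skip the tool round-trip entirely for greetings/small talk."""
    return not CHITCHAT.match(prompt)

def select_tools_for_prompt(prompt: str, tools_by_subset: dict) -> list:
    """Send the LLM only the tool subsets whose regex matches the prompt."""
    selected = []
    for name, pattern in TOOL_SUBSET_PATTERNS.items():
        if pattern.search(prompt):
            selected.extend(tools_by_subset.get(name, []))
    # Assumed fallback: offer everything when nothing matches, so no query
    # is starved of tools by an over-narrow regex.
    return selected or [t for ts in tools_by_subset.values() for t in ts]

async def run_tool_calls(calls):
    """Independent tool calls run concurrently via asyncio.gather."""
    return await asyncio.gather(*(call() for call in calls))
```

The speed claims rest on these regexes matching real user prompts, which is why the review asks for tests over this routing.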
Blocking
- `print()`/`console.log` left in — `llms_service.py` (incl. one that dumps the full `messages` payload), `databases_service.py` (top-level print on import + several CHECK/DEBUG/RAW JSON prints), `tools_utils.py`, `vcelldb_controller.py`, `llms_router.py`, and `ChatBox.tsx` (`PPPPPP`, `RRRRRR`, `AAAAAA`, `bmkeys` ×2).
- `bmkeys = []` is reset inside the per-tool-call loop in `llms_service.py` — only the last tool call's keys survive. Move the initialization above the loop.
- `direct_text = response.choices[0].message or ""` should be `.message.content or ""` in `llms_service.py`.
- `databases_service.get_xml_file` calls `check_vcell_connectivity()` (DNS-checks `vcell.cam.uchc.edu`) before hitting biomodels.org.
- BioModels base URL is inconsistent: `https://biomodels.org/` vs `https://www.biomodels.org/` (frontend / `docker-compose.yml` use the latter). Pick one and centralize.
- `from multiprocessing import process` in `llms_router.py:1` (likely IDE auto-import); `import Suspense from "react"` in `analyze/[id]/page.tsx:4` is a default-import typo (should be a named import) and unused.
- Stray `ß` character in a comment in `llms_service.py` (`"simple, conversational promptsß"`).

Should fix
- `default_rows=25` is forced inside `execute_tool` regardless of what the model requests; the tool schema still advertises `maximum: 50`. Either raise the cap or update the schema.
- Dead weight: unused `CategoryEnum` / `OrderByEnum` in `bmdb_schema.py`; empty `sanitize_xml_content` stub in `databases_service.py`; commented-out payload/userMessage blocks duplicated across `handleSendMessage` and `handleSendMessageBMDB` in `ChatBox.tsx`.
- `handleSendMessage` and `handleSendMessageBMDB` in `ChatBox.tsx` are ~140-line copies; parameterize over the database key.
- `test_vcelldb_service.py` only got its import path updated for the rename. No tests cover the new `fetch_bmdb_models` / `get_xml_file` / `get_bmdb_model_info`, nor the load-bearing `should_use_tools` / `select_tools_for_prompt` regex routing on which the speed claims rest.
- `BMDB_SYSTEM_PROMPT` is missing the publications guidance the old monolithic prompt had — BMDB-mode publication questions will degrade.

Migration / deployer notes
- `app.services.vcelldb_service` is renamed to `app.services.databases_service`. Any out-of-tree importer (e.g. `populate_db.ipynb`, CI scripts) must be updated.
- `NEXT_PUBLIC_API_URL_BMDB` is consumed in `frontend/app/search/page.tsx` and `frontend/app/search/[bmid]/page.tsx`. Confirm it lands in `frontend/.env.example`.
- `Settings` switched to Pydantic v2 `SettingsConfigDict(extra="ignore")` — masks future env var typos silently.
- `get_llm_response` now returns a 3-tuple `(result, bmkeys, tool_summary)`; affected endpoints' JSON gains a `tool_summary` field.

Test plan
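One concrete regression test worth adding covers the `bmkeys` reset flagged under Blocking. A self-contained sketch — the loop body below is a stand-in for the real tool-call loop in `llms_service.py`, and the IDs are made-up test data:

```python
def collect_bmkeys(tool_results: list[dict]) -> list:
    """Stand-in for the per-tool-call loop in llms_service.py.

    The fix: initialize bmkeys ONCE, above the loop, so keys from every
    tool call accumulate instead of only the last call's surviving.
    """
    bmkeys = []  # the bug was re-running this assignment inside the loop
    for result in tool_results:
        bmkeys.extend(result.get("bmkeys", []))
    return bmkeys

def test_bmkeys_accumulate_across_tool_calls():
    results = [
        {"bmkeys": ["BIOMD0000000001"]},
        {"bmkeys": ["BIOMD0000000002", "BIOMD0000000003"]},
        {},  # a tool call returning no keys must not reset the list
    ]
    assert collect_bmkeys(results) == [
        "BIOMD0000000001", "BIOMD0000000002", "BIOMD0000000003",
    ]
```

Similar table-driven tests over `should_use_tools` / `select_tools_for_prompt` prompt fixtures would back the response-time claims.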
🤖 Generated with Claude Code