From f5b82d94a68fe29bb183c10a17f95df044dfafb2 Mon Sep 17 00:00:00 2001 From: Ajay Thorve Date: Tue, 12 May 2026 02:23:43 -0700 Subject: [PATCH 01/15] =?UTF-8?q?feat(cli):=20sidecar=20UX=20=E2=80=94=20w?= =?UTF-8?q?izard,=20doctor,=20agents,=20completions,=20fixes?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bare-agent shortcuts (claude/codex/cursor/hermes), first-run setup wizard with config pre-fill, doctor / agents / completions subcommands, and `completions --install` for cargo-install users. Codex e2e fixes: strip ChatGPT JWT, correct pass-through args. NMF-86 missing-OPENAI_API_KEY warning surfaced eagerly. Friendlier copy on port already in use. Signed-off-by: Ajay Thorve --- ATTRIBUTIONS-Rust.md | 1413 ++++++++++++++++- Cargo.lock | 100 +- crates/cli/Cargo.toml | 2 + crates/cli/src/completions_install.rs | 116 ++ crates/cli/src/config.rs | 262 ++- crates/cli/src/doctor.rs | 554 +++++++ crates/cli/src/gateway.rs | 109 +- crates/cli/src/launcher.rs | 168 +- crates/cli/src/main.rs | 70 +- crates/cli/src/server.rs | 18 +- crates/cli/src/setup.rs | 727 +++++++++ crates/cli/tests/cli_tests.rs | 58 +- .../coverage/completions_install_tests.rs | 78 + crates/cli/tests/coverage/config_tests.rs | 46 +- crates/cli/tests/coverage/doctor_tests.rs | 170 ++ crates/cli/tests/coverage/gateway_tests.rs | 123 ++ crates/cli/tests/coverage/installer_tests.rs | 4 +- crates/cli/tests/coverage/launcher_tests.rs | 73 +- crates/cli/tests/coverage/setup_tests.rs | 247 +++ .../coding-agent-claude-code.md | 14 +- .../coding-agent-codex.md | 8 +- .../coding-agent-cursor.md | 6 +- .../coding-agent-gateway.md | 18 +- .../coding-agent-hermes.md | 6 +- integrations/coding-agents/README.md | 12 +- .../coding-agents/claude-code/README.md | 14 +- .../claude-code/hooks/hooks.json | 26 +- integrations/coding-agents/codex/README.md | 6 +- integrations/coding-agents/cursor/README.md | 6 +- 29 files changed, 4214 insertions(+), 240 deletions(-) create 
mode 100644 crates/cli/src/completions_install.rs create mode 100644 crates/cli/src/doctor.rs create mode 100644 crates/cli/src/setup.rs create mode 100644 crates/cli/tests/coverage/completions_install_tests.rs create mode 100644 crates/cli/tests/coverage/doctor_tests.rs create mode 100644 crates/cli/tests/coverage/setup_tests.rs diff --git a/ATTRIBUTIONS-Rust.md b/ATTRIBUTIONS-Rust.md index eb7bb637..589cce93 100644 --- a/ATTRIBUTIONS-Rust.md +++ b/ATTRIBUTIONS-Rust.md @@ -5246,6 +5246,216 @@ limitations under the License. limitations under the License. +``` + +## clap_complete - 4.6.5 +**Repository URL**: https://github.com/clap-rs/clap +**License Type(s)**: Apache-2.0 +### License: https://spdx.org/licenses/Apache-2.0.html +``` + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. 
+ + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + + ``` ## clap_derive - 4.6.0 @@ -6115,6 +6325,36 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. +``` + +## console - 0.15.11 +**Repository URL**: https://github.com/console-rs/console +**License Type(s)**: MIT +### License: https://spdx.org/licenses/MIT.html +``` +The MIT License (MIT) + +Copyright (c) 2017 Armin Ronacher + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + + ``` ## const-oid - 0.10.2 @@ -7607,6 +7847,36 @@ Apache License See the License for the specific language governing permissions and limitations under the License. +``` + +## dialoguer - 0.11.0 +**Repository URL**: https://github.com/console-rs/dialoguer +**License Type(s)**: MIT +### License: https://spdx.org/licenses/MIT.html +``` +The MIT License (MIT) + +Copyright (c) 2017 Armin Ronacher + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + + ``` ## digest - 0.11.2 @@ -8226,13 +8496,223 @@ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at - http://www.apache.org/licenses/LICENSE-2.0 + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. + +``` + +## encode_unicode - 1.0.0 +**Repository URL**: https://github.com/tormol/encode_unicode +**License Type(s)**: Apache-2.0 +### License: https://spdx.org/licenses/Apache-2.0.html +``` + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. 
+ + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. ``` @@ -27239,6 +27719,215 @@ limitations under the License. ``` +## shell-words - 1.1.1 +**Repository URL**: https://github.com/tmiasko/shell-words +**License Type(s)**: Apache-2.0 +### License: https://spdx.org/licenses/Apache-2.0.html +``` + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. 
You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. 
Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. + +``` + ## shlex - 1.3.0 **Repository URL**: https://github.com/comex/rust-shlex **License Type(s)**: Apache-2.0 @@ -28890,51 +29579,213 @@ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. -7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. 
You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. 
We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. + +``` + +## thiserror - 1.0.69 +**Repository URL**: https://github.com/dtolnay/thiserror +**License Type(s)**: Apache-2.0 +### License: https://spdx.org/licenses/Apache-2.0.html +``` +Apache License +Version 2.0, January 2004 +http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + +"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. + +"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. + +"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. + +"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. 
+ +"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. + +"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. + +"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). + +"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. + +"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." 
+ +"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. + +4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: + + (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + +To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. 
+ +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + +http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. + +``` + +## thiserror - 2.0.18 +**Repository URL**: https://github.com/dtolnay/thiserror +**License Type(s)**: Apache-2.0 +### License: https://spdx.org/licenses/Apache-2.0.html +``` +Apache License +Version 2.0, January 2004 +http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + +"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. + +"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. + +"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. + +"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. + +"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
+ +"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. + +"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). + +"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. + +"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." + +"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 
+ +2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. + +4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: + + (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. -8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. +8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. -9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. 
However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. +9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. +To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. 
We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] @@ -28942,7 +29793,7 @@ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at - http://www.apache.org/licenses/LICENSE-2.0 +http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, @@ -28952,7 +29803,7 @@ limitations under the License. ``` -## thiserror - 2.0.18 +## thiserror-impl - 1.0.69 **Repository URL**: https://github.com/dtolnay/thiserror **License Type(s)**: Apache-2.0 ### License: https://spdx.org/licenses/Apache-2.0.html @@ -31581,38 +32432,247 @@ SOFTWARE, YOU UNEQUIVOCALLY ACCEPT, AND AGREE TO BE BOUND BY, ALL OF THE TERMS AND CONDITIONS OF THIS AGREEMENT. IF YOU DO NOT AGREE, DO NOT DOWNLOAD, INSTALL, COPY, DISTRIBUTE OR USE THE DATA FILES OR SOFTWARE. -Permission is hereby granted, free of charge, to any person obtaining a -copy of data files and any associated documentation (the "Data Files") or -software and any associated documentation (the "Software") to deal in the -Data Files or Software without restriction, including without limitation -the rights to use, copy, modify, merge, publish, distribute, and/or sell -copies of the Data Files or Software, and to permit persons to whom the -Data Files or Software are furnished to do so, provided that either (a) -this copyright and permission notice appear with all copies of the Data -Files or Software, or (b) this copyright and permission notice appear in -associated Documentation. 
+Permission is hereby granted, free of charge, to any person obtaining a +copy of data files and any associated documentation (the "Data Files") or +software and any associated documentation (the "Software") to deal in the +Data Files or Software without restriction, including without limitation +the rights to use, copy, modify, merge, publish, distribute, and/or sell +copies of the Data Files or Software, and to permit persons to whom the +Data Files or Software are furnished to do so, provided that either (a) +this copyright and permission notice appear with all copies of the Data +Files or Software, or (b) this copyright and permission notice appear in +associated Documentation. + +THE DATA FILES AND SOFTWARE ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY +KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF +THIRD PARTY RIGHTS. + +IN NO EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS NOTICE +BE LIABLE FOR ANY CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, +OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, +WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, +ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THE DATA +FILES OR SOFTWARE. + +Except as contained in this notice, the name of a copyright holder shall +not be used in advertising or otherwise to promote the sale, use or other +dealings in these Data Files or Software without prior written +authorization of the copyright holder. + +``` + +## unicode-segmentation - 1.13.2 +**Repository URL**: https://github.com/unicode-rs/unicode-segmentation +**License Type(s)**: Apache-2.0 +### License: https://spdx.org/licenses/Apache-2.0.html +``` + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. 
+ + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the 
following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. -THE DATA FILES AND SOFTWARE ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY -KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF -THIRD PARTY RIGHTS. + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. -IN NO EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS NOTICE -BE LIABLE FOR ANY CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, -OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, -WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, -ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THE DATA -FILES OR SOFTWARE. +Copyright [yyyy] [name of copyright owner] -Except as contained in this notice, the name of a copyright holder shall -not be used in advertising or otherwise to promote the sale, use or other -dealings in these Data Files or Software without prior written -authorization of the copyright holder. +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and +limitations under the License. ``` -## unicode-segmentation - 1.13.2 -**Repository URL**: https://github.com/unicode-rs/unicode-segmentation +## unicode-width - 0.2.2 +**Repository URL**: https://github.com/unicode-rs/unicode-width **License Type(s)**: Apache-2.0 ### License: https://spdx.org/licenses/Apache-2.0.html ``` @@ -36382,6 +37442,215 @@ THE SOFTWARE. ``` +## windows-sys - 0.59.0 +**Repository URL**: https://github.com/microsoft/windows-rs +**License Type(s)**: Apache-2.0 +### License: https://spdx.org/licenses/Apache-2.0.html +``` + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. 
+ + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright (c) Microsoft Corporation. 
+ + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + +``` + ## windows-sys - 0.61.2 **Repository URL**: https://github.com/microsoft/windows-rs **License Type(s)**: Apache-2.0 diff --git a/Cargo.lock b/Cargo.lock index 8843e51a..fdae05dc 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -327,6 +327,15 @@ dependencies = [ "strsim", ] +[[package]] +name = "clap_complete" +version = "4.6.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e0a7a9bfdb35811f9e59832f0f05975114d2251b415fb534108e6f34060fd772" +dependencies = [ + "clap", +] + [[package]] name = "clap_derive" version = "4.6.0" @@ -374,6 +383,19 @@ dependencies = [ "crossbeam-utils", ] +[[package]] +name = "console" +version = "0.15.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "054ccb5b10f9f2cbf51eb355ca1d05c2d279ce1804688d0db74b4733a5aeafd8" +dependencies = [ + "encode_unicode", + "libc", + "once_cell", + "unicode-width", + "windows-sys 0.59.0", +] + [[package]] name = "const-oid" version = "0.10.2" @@ -439,6 +461,17 @@ dependencies = [ "syn", ] +[[package]] +name = "dialoguer" +version = "0.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "658bce805d770f407bc62102fca7c2c64ceef2fbcb2b8bd19d2765ce093980de" +dependencies = [ + "console", + "shell-words", + "thiserror 1.0.69", +] + [[package]] name = "digest" version = "0.11.2" @@ -467,6 +500,12 @@ version = "1.15.0" source = "registry+https://github.com/rust-lang/crates.io-index" 
checksum = "48c757948c5ede0e46177b7add2e67155f70e33c07fea8284df6576da70b3719" +[[package]] +name = "encode_unicode" +version = "1.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "34aa73646ffb006b8f5147f3dc182bd4bcb190227ce861fc4a4844bf8e3cb2c0" + [[package]] name = "equivalent" version = "1.0.2" @@ -1162,7 +1201,7 @@ dependencies = [ "opentelemetry_sdk", "serde", "serde_json", - "thiserror", + "thiserror 2.0.18", "tokio", "tokio-stream", "tonic", @@ -1186,7 +1225,7 @@ dependencies = [ "serde_json_canonicalizer", "sha2", "tdigest", - "thiserror", + "thiserror 2.0.18", "tokio", "tokio-stream", "uuid", @@ -1200,6 +1239,8 @@ dependencies = [ "axum", "bytes", "clap", + "clap_complete", + "dialoguer", "futures-util", "http", "http-body-util", @@ -1209,7 +1250,7 @@ dependencies = [ "serde_json", "serde_yaml", "tempfile", - "thiserror", + "thiserror 2.0.18", "tokio", "toml", "toml_edit", @@ -1367,7 +1408,7 @@ dependencies = [ "futures-sink", "js-sys", "pin-project-lite", - "thiserror", + "thiserror 2.0.18", "tracing", ] @@ -1397,7 +1438,7 @@ dependencies = [ "opentelemetry_sdk", "prost", "reqwest", - "thiserror", + "thiserror 2.0.18", "tokio", "tonic", ] @@ -1427,7 +1468,7 @@ dependencies = [ "opentelemetry", "percent-encoding", "rand", - "thiserror", + "thiserror 2.0.18", "tokio", "tokio-stream", ] @@ -1655,7 +1696,7 @@ dependencies = [ "rustc-hash", "rustls", "socket2", - "thiserror", + "thiserror 2.0.18", "tokio", "tracing", "web-time", @@ -1676,7 +1717,7 @@ dependencies = [ "rustls", "rustls-pki-types", "slab", - "thiserror", + "thiserror 2.0.18", "tinyvec", "tracing", "web-time", @@ -2135,6 +2176,12 @@ dependencies = [ "digest", ] +[[package]] +name = "shell-words" +version = "1.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dc6fe69c597f9c37bfeeeeeb33da3530379845f10be461a66d16d03eca2ded77" + [[package]] name = "shlex" version = "1.3.0" @@ -2251,13 +2298,33 @@ dependencies = [ "windows-sys 0.61.2", 
] +[[package]] +name = "thiserror" +version = "1.0.69" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b6aaf5339b578ea85b50e080feb250a3e8ae8cfcdff9a461c9ec2904bc923f52" +dependencies = [ + "thiserror-impl 1.0.69", +] + [[package]] name = "thiserror" version = "2.0.18" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4288b5bcbc7920c07a1149a35cf9590a2aa808e0bc1eafaade0b80947865fbc4" dependencies = [ - "thiserror-impl", + "thiserror-impl 2.0.18", +] + +[[package]] +name = "thiserror-impl" +version = "1.0.69" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4fee6c4efc90059e10f81e6d42c60a18f76588c3d74cb83a0b242a2b6c7504c1" +dependencies = [ + "proc-macro2", + "quote", + "syn", ] [[package]] @@ -2573,6 +2640,12 @@ version = "1.13.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9629274872b2bfaf8d66f5f15725007f635594914870f65218920345aa11aa8c" +[[package]] +name = "unicode-width" +version = "0.2.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b4ac048d71ede7ee76d585517add45da530660ef4390e49b098733c6e897f254" + [[package]] name = "unsafe-libyaml" version = "0.2.11" @@ -2859,6 +2932,15 @@ dependencies = [ "windows-targets 0.52.6", ] +[[package]] +name = "windows-sys" +version = "0.59.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1e38bc4d79ed67fd075bcc251a1c39b32a1776bbe92e5bef1f0bf1f8c531853b" +dependencies = [ + "windows-targets 0.52.6", +] + [[package]] name = "windows-sys" version = "0.60.2" diff --git a/crates/cli/Cargo.toml b/crates/cli/Cargo.toml index 5c2cf3db..11e3edb5 100644 --- a/crates/cli/Cargo.toml +++ b/crates/cli/Cargo.toml @@ -22,9 +22,11 @@ async-stream = "0.3" axum = "0.8" bytes = "1" clap = { version = "4", features = ["derive", "env"] } +clap_complete = "4" futures-util = "0.3" http = "1" http-body-util = "0.1" +dialoguer = { version = "0.11", default-features = false 
}
 reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls-native-roots", "stream"] }
 serde = { version = "1", features = ["derive"] }
 serde_json = "1"
diff --git a/crates/cli/src/completions_install.rs b/crates/cli/src/completions_install.rs
new file mode 100644
index 00000000..e11b2bb2
--- /dev/null
+++ b/crates/cli/src/completions_install.rs
@@ -0,0 +1,116 @@
+// SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+// SPDX-License-Identifier: Apache-2.0
+
+//! `nemo-flow completions install` — write a shell completion script to the standard fpath /
+//! completions directory for the user's current `$SHELL`. Mirrors the file layout used by
+//! `scripts/install.sh` so curl-pipe installs and `cargo install` installs land in the same
+//! place.
+
+use std::ffi::OsString;
+use std::io::Write;
+use std::path::{Path, PathBuf};
+
+use clap_complete::Shell;
+
+use crate::config::Cli;
+use crate::error::CliError;
+
+/// Generates the completion script for `$SHELL` and writes it to the matching shell-specific
+/// path under `$HOME`. Returns the path written so the CLI can echo it back to the user. The
+/// shell argument is optional; when omitted, the function infers the shell from the `$SHELL`
+/// environment variable. Unsupported or undetectable shells produce a `Config` error so the
+/// caller can surface a clear message instead of writing to an unrelated path.
+pub(crate) fn install(shell: Option<Shell>) -> Result<PathBuf, CliError> { + let shell = match shell { + Some(shell) => shell, + None => detect_shell(std::env::var_os("SHELL"))?, + }; + let target = completion_path(shell, std::env::var_os("HOME"), std::env::var_os("ZDOTDIR"))?; + if let Some(parent) = target.parent() { + std::fs::create_dir_all(parent)?; + } + let mut clap_command = <Cli as clap::CommandFactory>::command(); + let mut buffer = Vec::new(); + clap_complete::generate(shell, &mut clap_command, "nemo-flow", &mut buffer); + write_atomic(&target, &buffer)?; + Ok(target) +} + +/// Returns the file path where the completion script for `shell` is installed. Pure function so +/// tests can exercise path selection without touching the filesystem; the live call passes the +/// process environment in. +fn completion_path( + shell: Shell, + home: Option<OsString>, + zdotdir: Option<OsString>, +) -> Result<PathBuf, CliError> { + match shell { + Shell::Zsh => { + let base = zdotdir.or(home).ok_or_else(|| { + CliError::Config("cannot resolve $ZDOTDIR or $HOME for zsh completion".into()) + })?; + Ok(PathBuf::from(base).join(".zfunc/_nemo-flow")) + } + Shell::Bash => { + let home = home.ok_or_else(|| { + CliError::Config("cannot resolve $HOME for bash completion".into()) + })?; + Ok(PathBuf::from(home).join(".bash_completion.d/nemo-flow")) + } + Shell::Fish => { + let home = home.ok_or_else(|| { + CliError::Config("cannot resolve $HOME for fish completion".into()) + })?; + Ok(PathBuf::from(home).join(".config/fish/completions/nemo-flow.fish")) + } + other => Err(CliError::Config(format!( + "`nemo-flow completions install` does not support {other} — \ + run `nemo-flow completions {other}` and redirect manually" + ))), + } +} + +/// Infers a `clap_complete::Shell` from `$SHELL`. Looks only at the basename of the path and +/// matches the three shells the install path supports. Anything else produces a `Config` error +/// pointing the user at the explicit-shell form.
+fn detect_shell(shell_env: Option<OsString>) -> Result<Shell, CliError> { + let raw = shell_env.ok_or_else(|| { + CliError::Config( + "$SHELL is not set; pass an explicit shell, e.g. `nemo-flow completions install zsh`" + .into(), + ) + })?; + let name = Path::new(&raw) + .file_name() + .and_then(|value| value.to_str()) + .unwrap_or_default(); + match name { + "zsh" => Ok(Shell::Zsh), + "bash" => Ok(Shell::Bash), + "fish" => Ok(Shell::Fish), + _ => Err(CliError::Config(format!( + "unsupported $SHELL `{name}` — \ + run `nemo-flow completions <shell>` and redirect manually" + ))), + } +} + +// Writes `bytes` to `target` via a same-directory temp file + rename so a half-finished install +// never leaves the user with a partially-written completion script. +fn write_atomic(target: &Path, bytes: &[u8]) -> Result<(), CliError> { + let parent = target.parent().unwrap_or_else(|| Path::new(".")); + let file_name = target + .file_name() + .and_then(|value| value.to_str()) + .unwrap_or("nemo-flow"); + let temp = parent.join(format!(".{file_name}.tmp")); + let mut handle = std::fs::File::create(&temp)?; + handle.write_all(bytes)?; + handle.sync_all()?; + std::fs::rename(&temp, target)?; + Ok(()) +} + +#[cfg(test)] +#[path = "../tests/coverage/completions_install_tests.rs"] +mod tests; diff --git a/crates/cli/src/config.rs b/crates/cli/src/config.rs index e035dac5..6a9bf7ec 100644 --- a/crates/cli/src/config.rs +++ b/crates/cli/src/config.rs @@ -23,27 +23,163 @@ pub(crate) struct Cli { #[derive(Debug, Clone, Subcommand)] pub(crate) enum Command { + /// Run Claude Code with observability (setup on first use) + #[command( + long_about = "Run Anthropic's `claude` CLI under an ephemeral NeMo Flow gateway. \ + Observability (ATIF + OpenInference) is wired in transparently via \ + ANTHROPIC_BASE_URL.
First-time use launches the setup wizard so the \ + `[agents.claude]` block lands in `.nemo-flow/config.toml` and observation \ + starts on the next invocation without prompts.", + after_help = "Examples:\n \ + nemo-flow claude\n \ + nemo-flow claude -- chat \"refactor the launcher\"\n \ + nemo-flow claude -- --resume " + )] + Claude(EasyPathCommand), + /// Run Codex with observability (setup on first use) + #[command( + long_about = "Run OpenAI's `codex` CLI under an ephemeral NeMo Flow gateway. NeMo Flow \ + injects a `nemo-flow-openai` provider override so codex points at the \ + gateway; the gateway then forwards to `--openai-base-url` (defaults to \ + api.openai.com) with `OPENAI_API_KEY` injected on the codex route (see \ + NMF-86 — codex's own auth.json JWT is stripped). Requires codex-cli >= \ + 0.129.0.", + after_help = "Examples:\n \ + nemo-flow codex\n \ + nemo-flow codex -- exec \"fix the bug in foo.rs\"\n \ + nemo-flow --openai-base-url https://inference-api.nvidia.com codex" + )] + Codex(EasyPathCommand), + /// Run Cursor with observability (setup on first use) + #[command( + long_about = "Run Cursor's `cursor-agent` CLI under an ephemeral NeMo Flow gateway. The \ + launcher temporarily patches `.cursor/hooks.json` in the project root \ + during the run and restores it on exit. Disable that via \ + `[agents.cursor] patch_restore_hooks = false` in config.toml if you \ + maintain `.cursor/hooks.json` yourself.", + after_help = "Examples:\n \ + nemo-flow cursor\n \ + nemo-flow cursor -- agent --resume " + )] + Cursor(EasyPathCommand), + /// Run Hermes with observability (setup on first use) + #[command( + long_about = "Run NVIDIA's Hermes agent under a NeMo Flow gateway. Unlike the other \ + agents, Hermes is typically run with persistent shell hooks (install via \ + `nemo-flow install hermes`) and a long-running gateway daemon on a fixed \ + port. 
The Hermes config (`~/.hermes/config.yaml`) must point its \ + `model.base_url` at that daemon.", + after_help = "Examples:\n \ + nemo-flow hermes\n \ + nemo-flow hermes -- chat --provider custom" + )] + Hermes(EasyPathCommand), + /// Run the interactive setup (writes `.nemo-flow/config.toml`) + Config(ConfigCommand), + /// Diagnose env, agents, config, observability (use `--json` for machine output) + Doctor(DoctorCommand), + /// List supported and locally-detected agents (use `--json` for machine output) + Agents(AgentsCommand), + /// Print shell completion script (e.g. `nemo-flow completions zsh > ~/.zfunc/_nemo-flow`) + Completions(CompletionsCommand), + /// Run an agent deterministically (no wizard; errors if config is missing) + Run(RunCommand), + /// Install persistent hooks into an agent's own config directory (advanced) Install(InstallCommand), + /// Internal: subprocess used by installed hooks to forward events. Not typed by humans. + #[command(hide = true)] HookForward(HookForwardCommand), - Run(RunCommand), +} + +/// Args for `nemo-flow doctor`. `--json` is on this command (rather than as a global flag) +/// so it doesn't pollute the help output of subcommands where it has no meaning. +#[derive(Debug, Clone, Args)] +pub(crate) struct DoctorCommand { + /// Emit machine-readable JSON instead of the formatted human report. Versioned via + /// `schema_version`; stable shape for CI / evaluation harness consumption. + #[arg(long)] + pub(crate) json: bool, +} + +/// Args for `nemo-flow agents`. Shares the `--json` shape with `nemo-flow doctor`'s +/// `agents` field so the two outputs can be unified by downstream consumers. +#[derive(Debug, Clone, Args)] +pub(crate) struct AgentsCommand { + /// Emit the supported + detected agent list as JSON instead of formatted text. 
+ #[arg(long)] + pub(crate) json: bool, +} + +/// Args for `nemo-flow completions <shell>` (print to stdout) or `nemo-flow completions --install` +/// (auto-detect $SHELL and write to the standard fpath / completions directory). +/// +/// The Homebrew / curl-install flows drop completion scripts automatically; this subcommand is +/// the escape hatch for CI, custom shells, regeneration, and `cargo install` users where no +/// post-install hook runs. +#[derive(Debug, Clone, Args)] +pub(crate) struct CompletionsCommand { + /// Shell to generate the completion script for. Optional when used with `--install` (the + /// installer auto-detects `$SHELL`). + #[arg(value_enum)] + pub(crate) shell: Option<Shell>, + /// Write the completion script into the shell's standard completions directory instead of + /// printing to stdout. Auto-detects `$SHELL` when no shell argument is given. + #[arg(long)] + pub(crate) install: bool, +} + +/// Args for `nemo-flow config`. The setup wizard runs by default; `--reset` short-circuits to +/// a destructive clear. An optional positional agent name scopes both the wizard and `--reset` +/// to a single agent's settings, leaving other agents' blocks untouched. +#[derive(Debug, Clone, Args)] +pub(crate) struct ConfigCommand { + /// Scope this run to one agent. Wizard skips the agent multi-select; `--reset` removes + /// only that agent's block from the existing config file. Omit to operate on all agents. + #[arg(value_enum)] + pub(crate) agent: Option<CodingAgent>, + /// Delete the project config file (or remove just the scoped agent's block when an agent + /// is named). The wizard does NOT run after a reset — invoke `nemo-flow config` again to + /// re-create the file from scratch.
+ #[arg(long)] + pub(crate) reset: bool, +} #[derive(Debug, Clone, Default, Args)] pub(crate) struct ServerArgs { + /// Path to an explicit config file (disables auto-discovery of workspace/global/system) #[arg(long)] pub(crate) config: Option<PathBuf>, + /// Address for the gateway to listen on in daemon mode (default 127.0.0.1:4040) #[arg(long, env = "NEMO_FLOW_GATEWAY_BIND")] pub(crate) bind: Option<SocketAddr>, + /// Upstream OpenAI-compatible base URL (e.g. https://api.openai.com, NVIDIA inference) #[arg(long, env = "NEMO_FLOW_OPENAI_BASE_URL")] pub(crate) openai_base_url: Option<String>, + /// Upstream Anthropic base URL (e.g. https://api.anthropic.com) #[arg(long, env = "NEMO_FLOW_ANTHROPIC_BASE_URL")] pub(crate) anthropic_base_url: Option<String>, + /// Directory to write ATIF trajectory JSON files into per session #[arg(long, env = "NEMO_FLOW_ATIF_DIR")] pub(crate) atif_dir: Option<PathBuf>, + /// OpenInference-compatible OTLP HTTP endpoint for streaming spans (Phoenix, Arize, etc.) #[arg(long, env = "NEMO_FLOW_OPENINFERENCE_ENDPOINT")] pub(crate) openinference_endpoint: Option<String>, } +impl ServerArgs { + /// True when the user passed any daemon-specific server flag on the CLI. Used by the bare + /// `nemo-flow` dispatch to choose between "user wants the gateway daemon" (any daemon flag + /// present) and "user just typed the bare command" (start the setup wizard). `--config` is + /// excluded — it's relevant to every subcommand, not a daemon-mode signal. + pub(crate) fn requested_daemon_mode(&self) -> bool { + self.bind.is_some() + || self.openai_base_url.is_some() + || self.anthropic_base_url.is_some() + || self.atif_dir.is_some() + || self.openinference_endpoint.is_some() + } +} + #[derive(Debug, Clone)] pub(crate) struct GatewayConfig { pub(crate) bind: SocketAddr, @@ -109,6 +245,19 @@ pub(crate) struct HookForwardCommand { pub(crate) fail_closed: bool, } +/// Args for the easy-path agent shortcut (`nemo-flow claude`, `nemo-flow codex`, etc.).
+/// Holds only pass-through agent args; the agent itself is selected by which subcommand variant +/// is invoked, and all observability/upstream settings come from the resolved config file. If no +/// config file is present, the dispatcher fires setup (Phase 3). Phase 2 errors with a +/// pointer to `nemo-flow config` since setup isn't wired up yet. +#[derive(Debug, Clone, Args)] +pub(crate) struct EasyPathCommand { + /// Pass-through args forwarded to the underlying agent process. Use `--` to separate them + /// from `nemo-flow`'s own flags. See the `Examples` section below for agent-specific shapes. + #[arg(last = true)] + pub(crate) command: Vec<String>, +} + #[derive(Debug, Clone, Args)] pub(crate) struct RunCommand { #[arg(long, value_enum)] @@ -138,6 +287,10 @@ pub(crate) struct RunCommand { #[derive(Debug, Clone, Copy, PartialEq, Eq, ValueEnum)] #[value(rename_all = "kebab-case")] pub(crate) enum CodingAgent { + /// Canonical CLI spelling is `claude` (matches Anthropic's own binary name and the TOML + /// `[agents.claude]` key). `claude-code` is kept as an input alias for backward compat + /// with hooks installed before this rename. + #[value(name = "claude", alias = "claude-code")] ClaudeCode, Codex, Cursor, @@ -241,27 +394,34 @@ impl Default for CursorAgentConfig { } } +// TOML file shape grouped by user intent. Sections map 1:1 onto fields already present on +// `GatewayConfig` / `AgentConfigs`; this is a rename pass — no new runtime knobs land here. +// `[plugins]` is reserved as a forward-compatible block so users editing config today +// need no rewrite once the plugin runtime lands.
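As a concrete illustration of the section layout the comment above describes, a workspace `.nemo-flow/config.toml` might look like the sketch below. The keys come from the patch; the values are illustrative, not defaults shipped by the CLI:

```toml
# Hypothetical .nemo-flow/config.toml using the renamed sections.
[upstream]
openai_base_url = "https://api.openai.com"
anthropic_base_url = "https://api.anthropic.com"

[observability]
atif_dir = ".nemo-flow/trajectories"

[export.openinference]
endpoint = "http://localhost:6006/v1/traces"

# Reserved block; nothing in-process consumes it until the plugin runtime lands.
[plugins]
```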
#[derive(Debug, Clone, Default, Deserialize)] struct FileConfig { - server: Option<FileServerConfig>, - session: Option<FileSessionConfig>, + upstream: Option<FileUpstreamConfig>, + observability: Option<FileObservabilityConfig>, export: Option<FileExportConfig>, + plugins: Option<FilePluginsConfig>, agents: Option<FileAgentsConfig>, } #[derive(Debug, Clone, Default, Deserialize)] -struct FileServerConfig { +struct FileUpstreamConfig { openai_base_url: Option<String>, anthropic_base_url: Option<String>, } #[derive(Debug, Clone, Default, Deserialize)] -struct FileSessionConfig { +struct FileObservabilityConfig { atif_dir: Option<PathBuf>, metadata: Option, - plugin_config: Option, } +// `[export.<backend>]` stays nested so future per-backend config (headers, timeout, protocol) +// can live alongside `endpoint` without flattening into a wall of `<backend>_*` keys at the +// observability layer. #[derive(Debug, Clone, Default, Deserialize)] struct FileExportConfig { openinference: Option<FileOpenInferenceConfig>, @@ -272,10 +432,19 @@ struct FileOpenInferenceConfig { endpoint: Option<String>, } +#[derive(Debug, Clone, Default, Deserialize)] +struct FilePluginsConfig { + // Reserved for the plugin runtime. Stored on `GatewayConfig.plugin_config` for now; + // nothing in-process consumes it until the plugin runtime lands. + config: Option, +} + #[derive(Debug, Clone, Default, Deserialize)] struct FileAgentsConfig { - #[serde(rename = "claude-code")] - claude_code: Option, + // Keys match the agent's CLI invocation name (`claude`, `codex`, `cursor`, `hermes`) — the + // word the user types at the shell — not the product name ("Claude Code") or the internal + // `CodingAgent` enum kebab spelling. Same convention as the bare-agent shortcut in Phase 2. + claude: Option, codex: Option, cursor: Option, hermes: Option, } @@ -431,13 +600,20 @@ fn load_shared_config(explicit: Option<&PathBuf>) -> Result bool { + config_paths(None).iter().any(|path| path.exists()) +} + // Returns the config search path. An explicit path disables implicit discovery; otherwise system // config is lowest priority, the nearest project config is next, and user config is merged last.
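The last-wins layering described above (system lowest, nearest project next, user merged last, each later layer overriding only the fields it actually sets) can be sketched with plain Options. Names here are illustrative, not the crate's own types:

```rust
// Last-wins merge over optional fields: a later layer overrides only what it sets.
#[derive(Debug, Default, Clone, PartialEq)]
struct Layer {
    openai_base_url: Option<String>,
    atif_dir: Option<String>,
}

fn merge(base: Layer, over: Layer) -> Layer {
    Layer {
        // `Option::or` keeps the overriding layer's value when present,
        // otherwise falls back to the lower-priority layer.
        openai_base_url: over.openai_base_url.or(base.openai_base_url),
        atif_dir: over.atif_dir.or(base.atif_dir),
    }
}

fn main() {
    let system = Layer { openai_base_url: Some("https://api.openai.com".into()), atif_dir: None };
    let project = Layer { openai_base_url: None, atif_dir: Some(".nemo-flow/atif".into()) };
    let user = Layer { openai_base_url: Some("https://inference-api.example".into()), atif_dir: None };
    // system < project < user, mirroring the config_paths() ordering.
    let resolved = merge(merge(system, project), user);
    assert_eq!(resolved.openai_base_url.as_deref(), Some("https://inference-api.example"));
    assert_eq!(resolved.atif_dir.as_deref(), Some(".nemo-flow/atif"));
    println!("{resolved:?}");
}
```

The design point is that an omitted key in a higher-priority file is "no opinion", not "reset to default", which is why every field is an `Option` until the merge completes.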
fn config_paths(explicit: Option<&PathBuf>) -> Vec<PathBuf> { if let Some(path) = explicit { return vec![path.clone()]; } - let mut paths = vec![PathBuf::from("/etc/nemo-flow/gateway.toml")]; + let mut paths = vec![PathBuf::from("/etc/nemo-flow/config.toml")]; if let Ok(cwd) = std::env::current_dir() && let Some(project) = find_project_config(&cwd) { @@ -453,7 +629,7 @@ fn config_paths(explicit: Option<&PathBuf>) -> Vec<PathBuf> { // The first hit wins so nested projects can override parent workspace defaults. fn find_project_config(start: &std::path::Path) -> Option<PathBuf> { for ancestor in start.ancestors() { - let path = ancestor.join(".nemo-flow/gateway.toml"); + let path = ancestor.join(".nemo-flow/config.toml"); if path.exists() { return Some(path); } @@ -465,9 +641,9 @@ fn find_project_config(start: &std::path::Path) -> Option<PathBuf> { // config loading portable in minimal environments where no home directory is visible. fn user_config_path() -> Option<PathBuf> { if let Some(base) = std::env::var_os("XDG_CONFIG_HOME") { - return Some(PathBuf::from(base).join("nemo-flow/gateway.toml")); + return Some(PathBuf::from(base).join("nemo-flow/config.toml")); } - home_dir().map(|home| home.join(".config/nemo-flow/gateway.toml")) + home_dir().map(|home| home.join(".config/nemo-flow/config.toml")) } // Applies the typed TOML config model to the resolved runtime config.
Missing sections and fields @@ -477,46 +653,50 @@ fn apply_file_config(resolved: &mut ResolvedConfig, value: toml::Value) -> Resul { let config: FileConfig = value.try_into().map_err(|error| { CliError::Config(format!("invalid gateway configuration shape: {error}")) })?; - apply_file_server_config(&mut resolved.gateway, config.server); - apply_file_session_config(&mut resolved.gateway, config.session); + apply_file_upstream_config(&mut resolved.gateway, config.upstream); + apply_file_observability_config(&mut resolved.gateway, config.observability); apply_file_export_config(&mut resolved.gateway, config.export); + apply_file_plugins_config(&mut resolved.gateway, config.plugins); apply_file_agents_config(&mut resolved.agents, config.agents); Ok(()) } -// Applies provider upstream defaults from file config. These values are the upstream targets used -// by direct gateway server mode; transparent `run` mode can still override them per invocation. -fn apply_file_server_config(gateway: &mut GatewayConfig, server: Option<FileServerConfig>) { - let Some(server) = server else { +// Applies upstream LLM provider URLs. These are the bases for OpenAI- and Anthropic-shaped +// gateway routes; transparent `run` mode can still override them per invocation. +fn apply_file_upstream_config(gateway: &mut GatewayConfig, upstream: Option<FileUpstreamConfig>) { + let Some(upstream) = upstream else { return; }; - if let Some(value) = server.openai_base_url { + if let Some(value) = upstream.openai_base_url { gateway.openai_base_url = value; } - if let Some(value) = server.anthropic_base_url { + if let Some(value) = upstream.anthropic_base_url { gateway.anthropic_base_url = value; } } -// Applies session-level exporter and metadata defaults. Missing optional fields leave earlier -// merge layers intact, which preserves global or project defaults when user config is partial.
-fn apply_file_session_config(gateway: &mut GatewayConfig, session: Option<FileSessionConfig>) { - let Some(session) = session else { +// Applies observability sinks: ATIF trajectory directory and session metadata tags applied to +// every span/trajectory. Missing fields preserve earlier merge layers. OpenInference endpoint +// lives under `[export.openinference]` (see `apply_file_export_config`) so per-backend config +// can grow there without restructuring this section. +fn apply_file_observability_config( + gateway: &mut GatewayConfig, + observability: Option<FileObservabilityConfig>, +) { + let Some(observability) = observability else { return; }; - if let Some(value) = session.atif_dir { + if let Some(value) = observability.atif_dir { gateway.atif_dir = Some(value); } - if let Some(value) = session.metadata { + if let Some(value) = observability.metadata { gateway.metadata = Some(value); } - if let Some(value) = session.plugin_config { - gateway.plugin_config = Some(value); - } } -// Applies optional OpenInference export config. The nested shape mirrors the docs and leaves room -// for future exporter-specific fields without changing the top-level config parser. +// Applies optional OpenInference export config. The nested shape leaves room for future +// exporter-specific fields (e.g., `headers`, `timeout`, `protocol`) without flattening into +// a wall of `openinference_*` keys at the observability layer. fn apply_file_export_config(gateway: &mut GatewayConfig, export: Option<FileExportConfig>) { let Some(export) = export else { return; @@ -528,6 +708,17 @@ fn apply_file_export_config(gateway: &mut GatewayConfig, export: Option<FileExportConfig>) +fn apply_file_plugins_config(gateway: &mut GatewayConfig, plugins: Option<FilePluginsConfig>) { + let Some(plugins) = plugins else { + return; + }; + if let Some(value) = plugins.config { + gateway.plugin_config = Some(value); + } +} + // Applies configured agent commands and Cursor's
Cursor's // `patch_restore_hooks` flag is intentionally tri-state in file config so omitted values preserve // the safe default while explicit `false` disables temporary hook mutation. @@ -535,7 +726,7 @@ fn apply_file_agents_config(agents: &mut AgentConfigs, file_agents: Option &'static str { match self { - Self::ClaudeCode => "claude-code", + Self::ClaudeCode => "claude", Self::Codex => "codex", Self::Cursor => "cursor", Self::Hermes => "hermes", diff --git a/crates/cli/src/doctor.rs b/crates/cli/src/doctor.rs new file mode 100644 index 00000000..8e0bbaa2 --- /dev/null +++ b/crates/cli/src/doctor.rs @@ -0,0 +1,554 @@ +// SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! `nemo-flow doctor` — environment + config + agent + observability health check. +//! +//! Split into three layers so the data path can be unit-tested without real I/O: +//! +//! - `collect_report()` does the I/O (env probes, $PATH scans, network checks, fs writability). +//! - `DoctorReport` is the resulting pure data shape. +//! - `format_human(&report)` / `format_json(&report)` render the report. + +use std::path::PathBuf; +use std::process::Stdio; +use std::time::Duration; + +use serde::Serialize; +use tokio::time::timeout; + +use crate::config::{ + CodingAgent, GatewayConfig, ResolvedConfig, ServerArgs, resolve_server_config, +}; +use crate::error::CliError; + +const NETWORK_TIMEOUT: Duration = Duration::from_secs(2); + +/// Outcome of one check inside the doctor report. The `details` field carries human-readable +/// supplementary text; the `status` is the bottom-line signal callers (and CI) use to decide +/// pass/fail. 
+#[derive(Debug, Clone, Serialize, PartialEq, Eq)] +pub(crate) struct Check { + pub name: &'static str, + pub status: Status, + pub details: String, +} + +#[derive(Debug, Clone, Copy, Serialize, PartialEq, Eq)] +#[serde(rename_all = "lowercase")] +pub(crate) enum Status { + Pass, + Warn, + Fail, + /// The check ran but no relevant state was detected — purely informational (e.g. an agent + /// not on $PATH). Renders as a dim dot; not counted toward exit code. + Info, +} + +/// Snapshot of the running system that the doctor renders. Stable schema, versioned via +/// `schema_version`. Adding fields is non-breaking; removing or renaming requires a bump. +#[derive(Debug, Clone, Serialize)] +pub(crate) struct DoctorReport { + pub schema_version: u32, + pub binary_version: &'static str, + pub environment: EnvironmentInfo, + pub configuration: ConfigurationInfo, + pub agents: Vec<AgentInfo>, + pub observability: Vec<Check>, + pub completions: Vec<Check>, +} + +#[derive(Debug, Clone, Serialize)] +pub(crate) struct EnvironmentInfo { + pub os: String, + pub arch: &'static str, + pub shell: Option<String>, +} + +#[derive(Debug, Clone, Serialize)] +pub(crate) struct ConfigurationInfo { + pub workspace: ConfigLayer, + pub global: ConfigLayer, + pub system: ConfigLayer, + pub default_agent: Option<String>, +} + +#[derive(Debug, Clone, Serialize)] +pub(crate) struct ConfigLayer { + pub path: PathBuf, + pub status: Status, + pub details: String, +} + +#[derive(Debug, Clone, Serialize)] +pub(crate) struct AgentInfo { + pub name: &'static str, + pub path: Option<PathBuf>, + pub version: Option<String>, + /// Free-form annotation, e.g. "hooks: installed" once we wire up hook detection. + pub annotation: String, +} + +/// Drives all checks and produces a single `DoctorReport`. Network probes are bounded by a +/// short timeout so the command always returns quickly. Filesystem checks short-circuit on +/// the first missing directory.
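The `Status` semantics above feed a single aggregate exit code: only `Fail` gates; `Warn` and `Info` never do. A minimal std-only sketch of that rule (the enum name mirrors the patch; everything else is illustrative):

```rust
// Aggregate exit-code rule: any Fail -> 1; Pass, Warn, and Info never gate.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Status {
    Pass,
    Warn,
    Fail,
    Info,
}

fn exit_code(statuses: &[Status]) -> u8 {
    // u8::from(bool) maps false -> 0, true -> 1, matching process exit conventions.
    u8::from(statuses.iter().any(|s| *s == Status::Fail))
}

fn main() {
    assert_eq!(exit_code(&[Status::Pass, Status::Warn, Status::Info]), 0);
    assert_eq!(exit_code(&[Status::Pass, Status::Fail]), 1);
    println!("ok");
}
```

Keeping warnings out of the exit code lets CI run `nemo-flow doctor` on imperfect machines without turning every missing optional sink into a red build.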
+pub(crate) async fn collect_report() -> Result<DoctorReport, CliError> { + let resolved = resolve_server_config(&ServerArgs::default()).unwrap_or_default(); + let cwd = std::env::current_dir().ok(); + let home = home_dir(); + + Ok(DoctorReport { + schema_version: 1, + binary_version: env!("CARGO_PKG_VERSION"), + environment: collect_environment(), + configuration: collect_configuration(cwd.as_deref(), home.as_deref()), + agents: collect_agents().await, + observability: collect_observability(&resolved.gateway).await, + completions: collect_completions(home.as_deref()), + }) +} + +fn collect_environment() -> EnvironmentInfo { + EnvironmentInfo { + os: format!("{} {}", std::env::consts::OS, os_version()), + arch: std::env::consts::ARCH, + shell: std::env::var("SHELL").ok().and_then(|path| { + std::path::Path::new(&path) + .file_name() + .map(|name| name.to_string_lossy().into_owned()) + }), + } +} + +fn os_version() -> String { + // `uname -r` works on macOS/Linux; on Windows we just report the OS name with no detail. + if cfg!(windows) { + return String::new(); + } + match std::process::Command::new("uname").arg("-r").output() { + Ok(out) if out.status.success() => String::from_utf8_lossy(&out.stdout).trim().to_string(), + _ => String::new(), + } +} + +fn collect_configuration( + cwd: Option<&std::path::Path>, + home: Option<&std::path::Path>, +) -> ConfigurationInfo { + let workspace_path = cwd + .map(|p| p.join(".nemo-flow").join("config.toml")) + .unwrap_or_else(|| PathBuf::from(".nemo-flow/config.toml")); + let global_path = home + .map(|h| h.join(".config").join("nemo-flow").join("config.toml")) + .unwrap_or_else(|| PathBuf::from("~/.config/nemo-flow/config.toml")); + let system_path = PathBuf::from("/etc/nemo-flow/config.toml"); + + ConfigurationInfo { + workspace: layer_status(&workspace_path), + global: layer_status(&global_path), + system: layer_status(&system_path), + // `default_agent` is reserved in the design for Phase 2 dispatch; not currently parsed + // out of FileConfig.
Doctor reports `None` until that lands. + default_agent: None, + } +} + +fn layer_status(path: &std::path::Path) -> ConfigLayer { + if !path.exists() { + return ConfigLayer { + path: path.to_path_buf(), + status: Status::Info, + details: "not present".into(), + }; + } + match std::fs::read_to_string(path) { + // Parse as `toml::Table` to match the rest of the loader (config.rs::load_shared_config). + // `toml::Value` parsing in `toml = 0.9` treats multi-section docs as a single Value and + // chokes on the second section header, so `Table` is the right top-level shape. + Ok(text) => match text.parse::<toml::Table>() { + Ok(_) => ConfigLayer { + path: path.to_path_buf(), + status: Status::Pass, + details: "valid".into(), + }, + Err(err) => ConfigLayer { + path: path.to_path_buf(), + status: Status::Fail, + details: format!("invalid TOML: {err}"), + }, + }, + Err(err) => ConfigLayer { + path: path.to_path_buf(), + status: Status::Fail, + details: format!("unreadable: {err}"), + }, + } +} + +async fn collect_agents() -> Vec<AgentInfo> { + let supported = [ + (CodingAgent::ClaudeCode, "claude", "claude"), + (CodingAgent::Codex, "codex", "codex"), + (CodingAgent::Cursor, "cursor", "cursor-agent"), + (CodingAgent::Hermes, "hermes", "hermes"), + ]; + let mut out = Vec::with_capacity(supported.len()); + for (_, display_name, exec) in supported { + let path = which_on_path(exec); + let version = match &path { + Some(p) => probe_version(p).await, + None => None, + }; + out.push(AgentInfo { + name: display_name, + path, + version, + annotation: String::new(), + }); + } + out +} + +fn which_on_path(exec: &str) -> Option<PathBuf> { + let path_var = std::env::var_os("PATH")?; + std::env::split_paths(&path_var) + .map(|dir| dir.join(exec)) + .find(|candidate| candidate.is_file()) +} + +async fn probe_version(binary: &std::path::Path) -> Option<String> { + // Spawn `<binary> --version` and read the first line of stdout. Bounded by the network
Bounded by the network + // timeout (re-used as a generic short timeout) so a misbehaving binary doesn't hang doctor. + let mut cmd = tokio::process::Command::new(binary); + cmd.arg("--version") + .stdout(Stdio::piped()) + .stderr(Stdio::null()) + .stdin(Stdio::null()); + let child = cmd.spawn().ok()?; + let output = timeout(NETWORK_TIMEOUT, child.wait_with_output()) + .await + .ok()? + .ok()?; + let stdout = String::from_utf8_lossy(&output.stdout); + let first_line = stdout.lines().next()?.trim(); + if first_line.is_empty() { + None + } else { + Some(first_line.to_string()) + } +} + +async fn collect_observability(gateway: &GatewayConfig) -> Vec { + let mut checks = Vec::new(); + + checks.push(match &gateway.atif_dir { + None => Check { + name: "ATIF dir", + status: Status::Info, + details: "not configured".into(), + }, + Some(path) => match check_dir_writable(path) { + Ok(()) => Check { + name: "ATIF dir", + status: Status::Pass, + details: format!("{} (writable)", path.display()), + }, + Err(err) => Check { + name: "ATIF dir", + status: Status::Fail, + details: format!("{}: {err}", path.display()), + }, + }, + }); + + checks.push(match &gateway.openinference_endpoint { + None => Check { + name: "OpenInference endpoint", + status: Status::Info, + details: "not configured".into(), + }, + Some(url) => probe_http(url).await, + }); + + checks +} + +fn check_dir_writable(dir: &std::path::Path) -> Result<(), std::io::Error> { + std::fs::create_dir_all(dir)?; + let probe = dir.join(".nemo-flow-write-probe"); + std::fs::write(&probe, b"")?; + std::fs::remove_file(&probe).ok(); + Ok(()) +} + +async fn probe_http(url: &str) -> Check { + let client = match reqwest::Client::builder().timeout(NETWORK_TIMEOUT).build() { + Ok(c) => c, + Err(err) => { + return Check { + name: "OpenInference endpoint", + status: Status::Fail, + details: format!("could not build HTTP client: {err}"), + }; + } + }; + match client.get(url).send().await { + Ok(resp) => Check { + name: "OpenInference 
endpoint", + status: if resp.status().is_success() || resp.status().is_redirection() { + Status::Pass + } else { + Status::Warn + }, + details: format!("{} (HTTP {})", url, resp.status().as_u16()), + }, + Err(err) => Check { + name: "OpenInference endpoint", + status: Status::Fail, + details: format!("{url}: {err}"), + }, + } +} + +fn collect_completions(home: Option<&std::path::Path>) -> Vec { + let mut checks = Vec::new(); + let shell = std::env::var("SHELL").ok().and_then(|s| { + std::path::Path::new(&s) + .file_name() + .map(|name| name.to_string_lossy().into_owned()) + }); + let Some(shell_name) = shell else { + checks.push(Check { + name: "Completions", + status: Status::Info, + details: "no $SHELL set; cannot infer install location".into(), + }); + return checks; + }; + let Some(home) = home else { + checks.push(Check { + name: "Completions", + status: Status::Info, + details: format!("$SHELL={shell_name}; could not resolve home dir"), + }); + return checks; + }; + let likely_path = match shell_name.as_str() { + "zsh" => Some(home.join(".zfunc").join("_nemo-flow")), + "bash" => Some(home.join(".bash_completion.d").join("nemo-flow")), + "fish" => Some( + home.join(".config") + .join("fish") + .join("completions") + .join("nemo-flow.fish"), + ), + _ => None, + }; + match likely_path { + Some(path) if path.exists() => checks.push(Check { + name: "Completions", + status: Status::Pass, + details: format!("{shell_name}: {}", path.display()), + }), + Some(path) => checks.push(Check { + name: "Completions", + status: Status::Info, + details: format!( + "{shell_name}: not installed (run `nemo-flow completions {shell_name} > {}`)", + path.display() + ), + }), + None => checks.push(Check { + name: "Completions", + status: Status::Info, + details: format!("{shell_name}: no known completion path; run `nemo-flow completions ` to generate"), + }), + } + checks +} + +fn home_dir() -> Option { + std::env::var_os("HOME") + .or_else(|| std::env::var_os("USERPROFILE")) + 
.map(PathBuf::from) +} + +/// Aggregate exit code: 1 if any check is Fail, 0 otherwise. Warnings do not fail. +pub(crate) fn exit_code(report: &DoctorReport) -> u8 { + let any_fail = report + .observability + .iter() + .chain(report.completions.iter()) + .any(|c| matches!(c.status, Status::Fail)) + || matches!(report.configuration.workspace.status, Status::Fail) + || matches!(report.configuration.global.status, Status::Fail) + || matches!(report.configuration.system.status, Status::Fail); + u8::from(any_fail) +} + +/// Renders the doctor report in the fixed human-readable layout the design doc shows. Sections +/// stay in the same order across runs so users can diff across machines. The banner header lives +/// in `crate::banner::print_doctor_header` (called from `run_doctor` before this renders) so the +/// pure formatter stays banner-free for tests. +pub(crate) fn format_human(report: &DoctorReport) -> String { + let mut out = String::new(); + out.push_str(&format!("\n NeMo Flow {}\n", report.binary_version)); + out.push_str(" ─────────────────────────────────────────────\n"); + out.push_str(" Environment\n"); + out.push_str(&format!( + " OS {}\n", + report.environment.os.trim() + )); + out.push_str(&format!(" Arch {}\n", report.environment.arch)); + if let Some(shell) = &report.environment.shell { + out.push_str(&format!(" Shell {shell}\n")); + } + out.push('\n'); + + out.push_str(" Configuration\n"); + out.push_str(&format!( + " Workspace {}\n", + format_layer(&report.configuration.workspace) + )); + out.push_str(&format!( + " Global {}\n", + format_layer(&report.configuration.global) + )); + out.push_str(&format!( + " System {}\n", + format_layer(&report.configuration.system) + )); + out.push('\n'); + + out.push_str(" Agents detected\n"); + for agent in &report.agents { + match &agent.path { + Some(path) => { + let version = agent.version.as_deref().unwrap_or("(unknown version)"); + out.push_str(&format!( + " {:<8} {}\n {}\n", + agent.name, + version, + 
path.display()
+                ));
+            }
+            None => {
+                out.push_str(&format!(" {:<8} not on $PATH\n", agent.name));
+            }
+        }
+    }
+    out.push('\n');
+
+    out.push_str(" Observability\n");
+    for check in &report.observability {
+        out.push_str(&format!(" {:<22} {}\n", check.name, check.details));
+    }
+    out.push('\n');
+
+    out.push_str(" Completions\n");
+    for check in &report.completions {
+        out.push_str(&format!(" {}\n", check.details));
+    }
+    out.push('\n');
+
+    if exit_code(report) == 0 {
+        out.push_str(" All checks passed.\n");
+    } else {
+        out.push_str(" Some checks FAILED; see details above.\n");
+    }
+    out
+}
+
+fn format_layer(layer: &ConfigLayer) -> String {
+    format!("{} {}", layer.path.display(), layer.details)
+}
+
+/// Renders the doctor report as machine-readable JSON. Versioned via `schema_version` so
+/// downstream consumers (CI dashboards, eval harnesses) can detect schema changes.
+pub(crate) fn format_json(report: &DoctorReport) -> Result<String, CliError> {
+    serde_json::to_string_pretty(report)
+        .map_err(|err| CliError::Config(format!("could not serialize doctor report: {err}")))
+}
+
+/// Runs `agents` — a thin wrapper over `collect_agents` that emits only the agent list. Shares
+/// the same JSON schema as `doctor.agents` for consistency.
+pub(crate) async fn agents_report() -> Vec<AgentInfo> {
+    collect_agents().await
+}
+
+/// Renders the agents listing in human form.
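The pass/warn/fail aggregation that `exit_code` and the footer lines above rely on can be modeled in a few lines. This is a simplified sketch, not the crate's code: `Status` here is a stand-in for the real enum, and the slice stands in for the report's check collections.

```rust
// Simplified model of the doctor exit-code policy: Info and Warn never fail
// the run; a single Fail flips the aggregate exit code to 1.
#[derive(Clone, Copy, PartialEq)]
enum Status {
    Pass,
    Info,
    Warn,
    Fail,
}

fn exit_code(checks: &[Status]) -> u8 {
    u8::from(checks.iter().any(|s| *s == Status::Fail))
}

fn main() {
    assert_eq!(exit_code(&[Status::Pass, Status::Warn, Status::Info]), 0);
    assert_eq!(exit_code(&[Status::Pass, Status::Fail]), 1);
}
```

This keeps `doctor` usable in CI gates: a warning-heavy machine still exits 0, so only genuine breakage stops a pipeline.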
+pub(crate) fn format_agents_human(agents: &[AgentInfo]) -> String {
+    let mut out = String::new();
+    out.push_str("\n Supported\n");
+    for agent in agents {
+        out.push_str(&format!(" {}\n", agent.name));
+    }
+    out.push('\n');
+    out.push_str(" Detected on this machine\n");
+    let detected: Vec<&AgentInfo> = agents.iter().filter(|a| a.path.is_some()).collect();
+    if detected.is_empty() {
+        out.push_str(" (none)\n");
+    } else {
+        for agent in detected {
+            let version = agent.version.as_deref().unwrap_or("(unknown version)");
+            let path = agent
+                .path
+                .as_ref()
+                .map(|p| p.display().to_string())
+                .unwrap_or_default();
+            out.push_str(&format!(
+                " {:<8} {}\n {}\n",
+                agent.name, version, path
+            ));
+        }
+    }
+    out.push('\n');
+    out
+}
+
+/// Renders the agents listing as JSON. Same shape as `DoctorReport.agents`.
+pub(crate) fn format_agents_json(agents: &[AgentInfo]) -> Result<String, CliError> {
+    serde_json::to_string_pretty(agents)
+        .map_err(|err| CliError::Config(format!("could not serialize agents report: {err}")))
+}
+
+/// Top-level entry point invoked by `nemo-flow doctor`. Emits to stdout and returns the
+/// appropriate process exit code (0 on pass-or-warn, 1 on any failure).
+pub(crate) async fn run_doctor(json: bool) -> Result<std::process::ExitCode, CliError> {
+    let report = collect_report().await?;
+    if json {
+        print!("{}", format_json(&report)?);
+    } else {
+        // Banner first, then the static report. JSON mode skips both so callers parsing the
+        // output don't have to strip ANSI/decorations.
+        crate::banner::print_doctor_header();
+        print!("{}", format_human(&report));
+    }
+    match exit_code(&report) {
+        0 => Ok(std::process::ExitCode::SUCCESS),
+        _ => Ok(std::process::ExitCode::FAILURE),
+    }
+}
+
+/// Top-level entry point invoked by `nemo-flow agents`. Always exits 0; the data drives caller
+/// decisions (e.g., CI gating on JSON output).
+pub(crate) async fn run_agents(json: bool) -> Result<std::process::ExitCode, CliError> {
+    let agents = agents_report().await;
+    let output = if json {
+        format_agents_json(&agents)?
+ } else { + format_agents_human(&agents) + }; + print!("{output}"); + Ok(std::process::ExitCode::SUCCESS) +} + +// `ResolvedConfig` defaults to "no settings" when no config file is present. Trait kept here +// so `unwrap_or_default()` works on the resolved config without leaking optionality into the +// rest of the doctor surface. The Default impl on `ResolvedConfig` is provided by its derive. +const _: fn() = || { + let _: ResolvedConfig = ResolvedConfig::default(); +}; + +#[cfg(test)] +#[path = "../tests/coverage/doctor_tests.rs"] +mod tests; diff --git a/crates/cli/src/gateway.rs b/crates/cli/src/gateway.rs index f00ec916..f371f99e 100644 --- a/crates/cli/src/gateway.rs +++ b/crates/cli/src/gateway.rs @@ -288,6 +288,7 @@ fn build_buffered_func( let url = prepared.upstream_url.clone(); let body_bytes = prepared.body_bytes.clone(); let headers = prepared.headers.clone(); + let route = prepared.provider; Arc::new(move |_request| { let http = http.clone(); let method = method.clone(); @@ -299,7 +300,9 @@ fn build_buffered_func( let response_bytes = response_bytes.clone(); Box::pin(async move { let response = - match forward_upstream_request(&http, &method, &url, &body_bytes, &headers).await { + match forward_upstream_request(&http, &method, &url, &body_bytes, &headers, route) + .await + { Ok(response) => response, Err(error) => { let message = error.to_string(); @@ -419,6 +422,7 @@ fn build_streaming_func( let url = prepared.upstream_url.clone(); let body_bytes = prepared.body_bytes.clone(); let headers = prepared.headers.clone(); + let route = prepared.provider; Arc::new(move |_request| { let http = http.clone(); let method = method.clone(); @@ -429,7 +433,9 @@ fn build_streaming_func( let upstream_error = upstream_error.clone(); Box::pin(async move { let response = - match forward_upstream_request(&http, &method, &url, &body_bytes, &headers).await { + match forward_upstream_request(&http, &method, &url, &body_bytes, &headers, route) + .await + { Ok(response) => 
response, Err(error) => { let message = error.to_string(); @@ -531,23 +537,113 @@ fn encode_sse_frame(event_json: &Value, route: ProviderRoute) -> String { } // Forwards the buffered request to the upstream provider with only the safe request headers. This -// is shared by the buffered and streaming managed funcs so header filtering stays consistent. +// is shared by the buffered and streaming managed funcs so header filtering stays consistent. When +// the inbound request carries no auth (e.g., codex with `requires_openai_auth=false` per NMF-86) +// the gateway injects the provider's API key from environment so the upstream sees authenticated +// traffic without forcing the agent to manage credentials. async fn forward_upstream_request( http: &reqwest::Client, method: &Method, url: &str, body_bytes: &Bytes, headers: &HeaderMap, + route: ProviderRoute, ) -> Result { + let sanitized = strip_chatgpt_oauth_for_openai_route(headers, route); let mut upstream = http.request(method.clone(), url).body(body_bytes.clone()); - for (name, value) in headers { + for (name, value) in &sanitized { if should_forward_request_header(name) { upstream = upstream.header(name, value); } } + upstream = inject_provider_auth(upstream, route, &sanitized); upstream.send().await } +// Removes ChatGPT-Plus OAuth JWTs from inbound `Authorization` on OpenAI routes. Codex 0.130 +// keeps sending the JWT from `~/.codex/auth.json` even when its provider override declares +// `requires_openai_auth=false`, and the JWT is a consumer token rejected by `api.openai.com` / +// LiteLLM-fronted endpoints (NVIDIA's `inference-api.nvidia.com`) with 401. By dropping the JWT +// here, `inject_provider_auth` then injects `OPENAI_API_KEY` from environment and the upstream +// sees a valid bearer token. Hermes-style clients that send a real `sk-...` API key are not +// affected — the JWT detector only triggers on `Bearer eyJ...` (base64 JSON header). Tracks +// NMF-86. 
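The "Bearer eyJ" detector mentioned in the comment above works because a JWT is three base64url segments and its header is JSON starting with `{"...`, which always encodes to `eyJ`. A real OpenAI API key (`sk-...`) can never match. A standalone sketch of that heuristic (the function name here is hypothetical, not the crate's):

```rust
// A ChatGPT OAuth credential is a JWT: base64url(JSON header).base64url(payload).signature.
// The JSON header begins with `{"`, so the encoded value always begins with "eyJ".
// A provider API key ("sk-...") never starts that way, so it passes through untouched.
fn looks_like_chatgpt_jwt(authorization: &str) -> bool {
    authorization.starts_with("Bearer eyJ")
}

fn main() {
    assert!(looks_like_chatgpt_jwt("Bearer eyJhbGciOiJSUzI1NiJ9.e30.sig"));
    assert!(!looks_like_chatgpt_jwt("Bearer sk-proj-abc123"));
    assert!(!looks_like_chatgpt_jwt("Bearer token-without-jwt-shape"));
}
```

The trade-off is deliberate: a prefix check is cheap and has no false positives for `sk-` keys, at the cost of also dropping any *other* JWT bearer token on OpenAI routes — acceptable here since the gateway re-injects `OPENAI_API_KEY` anyway.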
+fn strip_chatgpt_oauth_for_openai_route(headers: &HeaderMap, route: ProviderRoute) -> HeaderMap { + if !matches!( + route, + ProviderRoute::OpenAiResponses + | ProviderRoute::OpenAiChatCompletions + | ProviderRoute::OpenAiModels + ) { + return headers.clone(); + } + let mut out = headers.clone(); + let looks_like_jwt = out + .get(http::header::AUTHORIZATION) + .and_then(|value| value.to_str().ok()) + .map(|value| value.starts_with("Bearer eyJ")) + .unwrap_or(false); + if looks_like_jwt { + out.remove(http::header::AUTHORIZATION); + } + out +} + +// If the inbound request has no provider auth header (Authorization / x-api-key / api-key), read +// the provider's standard API key env var and attach it to the outbound request. Tracks NMF-86: +// codex 0.130 prefers `~/.codex/auth.json` ChatGPT-Plus OAuth over `OPENAI_API_KEY` and that JWT is +// rejected by `api.openai.com`, so codex now runs with `requires_openai_auth=false` and the +// gateway owns credentials. If neither inbound auth nor the env var is present, the request is +// forwarded as-is and the upstream returns a real 401 (caller can detect and surface). +fn inject_provider_auth( + builder: reqwest::RequestBuilder, + route: ProviderRoute, + inbound: &HeaderMap, +) -> reqwest::RequestBuilder { + inject_provider_auth_with_env(builder, route, inbound, |key| std::env::var(key).ok()) +} + +// Pure variant exposed for tests. The env lookup is injected so cases can be exercised without +// mutating process env state (which races with parallel test execution). 
+fn inject_provider_auth_with_env<F>(
+    builder: reqwest::RequestBuilder,
+    route: ProviderRoute,
+    inbound: &HeaderMap,
+    env_lookup: F,
+) -> reqwest::RequestBuilder
+where
+    F: Fn(&str) -> Option<String>,
+{
+    let already_authed = inbound.contains_key(http::header::AUTHORIZATION)
+        || inbound.contains_key("x-api-key")
+        || inbound.contains_key("api-key")
+        || inbound.contains_key("anthropic-api-key");
+    if already_authed {
+        return builder;
+    }
+    let (env_var, header_name) = match route {
+        ProviderRoute::OpenAiResponses
+        | ProviderRoute::OpenAiChatCompletions
+        | ProviderRoute::OpenAiModels => ("OPENAI_API_KEY", http::header::AUTHORIZATION.as_str()),
+        ProviderRoute::AnthropicMessages | ProviderRoute::AnthropicCountTokens => {
+            ("ANTHROPIC_API_KEY", "x-api-key")
+        }
+    };
+    let Some(value) = env_lookup(env_var) else {
+        return builder;
+    };
+    if value.is_empty() {
+        return builder;
+    }
+    let header_value = match route {
+        ProviderRoute::OpenAiResponses
+        | ProviderRoute::OpenAiChatCompletions
+        | ProviderRoute::OpenAiModels => format!("Bearer {value}"),
+        ProviderRoute::AnthropicMessages | ProviderRoute::AnthropicCountTokens => value,
+    };
+    builder.header(header_name, header_value)
+}
+
+// Plain byte passthrough used for streaming routes that lack a typed codec. The managed pipeline
+// requires a collector + finalizer, so without a codec we keep the simpler proxy behavior and skip
+// the LLM lifecycle event for that single request.
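The substance of the injection table above is the route-family → (env var, header, scheme) mapping: OpenAI routes get `Authorization: Bearer <key>`, Anthropic routes get the key raw in `x-api-key`. A minimal, dependency-free model of that mapping (the `Route` enum and function name here are illustrative stand-ins, not the crate's types):

```rust
// Each provider family pins one credential header and one encoding: OpenAI wraps the
// key in a Bearer scheme, Anthropic sends the key verbatim in `x-api-key`.
#[derive(Clone, Copy)]
enum Route {
    OpenAi,
    Anthropic,
}

fn auth_header_for(route: Route, key: &str) -> (&'static str, String) {
    match route {
        Route::OpenAi => ("authorization", format!("Bearer {key}")),
        Route::Anthropic => ("x-api-key", key.to_string()),
    }
}

fn main() {
    assert_eq!(auth_header_for(Route::OpenAi, "sk-x").1, "Bearer sk-x");
    let (name, value) = auth_header_for(Route::Anthropic, "ak-y");
    assert_eq!(name, "x-api-key");
    assert_eq!(value, "ak-y");
}
```

Keeping the env lookup injectable (as the pure variant above does) means this mapping can be asserted in tests without `std::env::set_var`, which is racy under parallel test execution.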
@@ -561,6 +657,7 @@ async fn passthrough_streaming( &prepared.upstream_url, &prepared.body_bytes, &prepared.headers, + prepared.provider, ) .await?; let status = response.status(); @@ -617,12 +714,14 @@ pub(crate) async fn models( .map(|p| p.as_str()) .unwrap_or(parts.uri.path()), ); + let sanitized = strip_chatgpt_oauth_for_openai_route(&parts.headers, provider); let mut upstream = state.http.get(upstream_url); - for (name, value) in &parts.headers { + for (name, value) in &sanitized { if should_forward_request_header(name) { upstream = upstream.header(name, value); } } + upstream = inject_provider_auth(upstream, provider, &sanitized); let upstream_response = upstream.send().await?; let status = upstream_response.status(); let headers = response_headers(upstream_response.headers()); diff --git a/crates/cli/src/launcher.rs b/crates/cli/src/launcher.rs index bbdf09d1..6a0e296b 100644 --- a/crates/cli/src/launcher.rs +++ b/crates/cli/src/launcher.rs @@ -13,8 +13,8 @@ use tokio::sync::oneshot; use tokio::task::JoinHandle; use crate::config::{ - AgentConfigs, CodingAgent, GatewayConfig, ResolvedConfig, RunCommand, ServerArgs, - resolve_run_config, + AgentConfigs, CodingAgent, EasyPathCommand, GatewayConfig, ResolvedConfig, RunCommand, + ServerArgs, any_config_file_exists, resolve_run_config, }; use crate::error::CliError; use crate::installer::{generated_hooks, hook_forward_command, merge_hooks, read_json_file}; @@ -35,6 +35,41 @@ pub(crate) async fn run( run.execute().await } +/// Runs the easy-path bare-agent shortcut (`nemo-flow claude`, `nemo-flow codex`, etc.). +/// +/// If no config file is present at any discovery layer, this fires the interactive setup inline +/// (`crate::setup::run`) which writes a `config.toml`, then proceeds to launch the agent. 
When
+/// config IS present, the easy path constructs a synthetic `RunCommand` and delegates to the
+/// same transparent-run pipeline `nemo-flow run` uses — same observability wiring, same agent
+/// argv resolution, same lifecycle management.
+pub(crate) async fn easy_path(
+    agent: CodingAgent,
+    command: EasyPathCommand,
+    inherited: Option<&ServerArgs>,
+) -> Result<ExitCode, CliError> {
+    if !any_config_file_exists() {
+        // No config anywhere — fire setup inline, scoped to the agent the user typed. After
+        // it returns, config discovery will pick up the freshly-written `config.toml` and
+        // `run()` below will see a populated environment. If setup errors (non-TTY, user
+        // cancelled), surface that directly.
+        crate::setup::run(Some(agent)).await?;
+    }
+    let synthetic = RunCommand {
+        agent: Some(agent),
+        config: None,
+        openai_base_url: None,
+        anthropic_base_url: None,
+        atif_dir: None,
+        openinference_endpoint: None,
+        session_metadata: None,
+        plugin_config: None,
+        dry_run: false,
+        print: false,
+        command: command.command,
+    };
+    run(synthetic, inherited).await
+}
+
 struct TransparentRun {
     agent: CodingAgent,
     prepared: PreparedRun,
@@ -84,6 +119,8 @@ impl TransparentRun {
         if self.dry_run {
             return Ok(ExitCode::SUCCESS);
         }
+        self.prepared
+            .print_live_status(self.agent, &self.gateway_url, &self.resolved);
         execute_live_run(
             self.listener,
             self.resolved.gateway,
@@ -128,24 +165,35 @@ fn resolve_agent_and_argv(
     Ok((agent, argv))
 }
 
-// Returns the command argv supplied on the CLI, or the configured command for an explicitly selected
-// agent. Empty CLI argv without `--agent` is rejected before inference because there is no executable
-// name to inspect.
+// Resolves the full argv to spawn. When `--agent` is set (the easy-path and explicit `--agent`
+// flows both go through this case), the configured agent command is the base argv and anything
+// after `--` is appended as pass-through args. When `--agent` is absent, `command.command` IS
+// the full argv (e.g., `nemo-flow run -- codex --model X` runs that exact command and infers
+// the agent from argv[0]).
 fn resolved_argv(command: &RunCommand, agents: &AgentConfigs) -> Result<Vec<String>, CliError> {
-    if !command.command.is_empty() {
-        return Ok(command.command.clone());
+    if let Some(agent) = command.agent {
+        let mut argv = configured_command(agent, agents)
+            .unwrap_or_else(|| vec![default_command_for(agent).to_string()]);
+        argv.extend(command.command.iter().cloned());
+        return Ok(argv);
     }
-    let agent = command.agent.ok_or_else(|| {
-        CliError::Launch(
+    if command.command.is_empty() {
+        return Err(CliError::Launch(
             "missing command; pass -- <command> or --agent with a configured command".into(),
-        )
-    })?;
-    configured_command(agent, agents).ok_or_else(|| {
-        CliError::Launch(format!(
-            "no configured command for {}; pass -- <command>",
-            agent.as_arg()
-        ))
-    })
+        ));
+    }
+    Ok(command.command.clone())
+}
+
+// Default agent binary names used when no `[agents.<name>] command = "..."` override is in the
+// resolved config. Matches the executable on $PATH that the wizard's detection probes for.
+const fn default_command_for(agent: CodingAgent) -> &'static str {
+    match agent {
+        CodingAgent::ClaudeCode => "claude",
+        CodingAgent::Codex => "codex",
+        CodingAgent::Cursor => "cursor-agent",
+        CodingAgent::Hermes => "hermes",
+    }
 }
 
 // Uses an explicit `--agent` when present and otherwise infers the agent from argv[0]. Inference is
Surface the + // missing-key state EARLY on stderr — a buried `self.notes.push` only renders under + // `--print` / `--dry-run`, which means the silent live-run case (the one users actually + // hit) would discover the missing key as a confusing 401 mid-session. + if std::env::var("OPENAI_API_KEY") + .ok() + .is_none_or(|v| v.is_empty()) + { + eprintln!( + "warning: OPENAI_API_KEY is not set. Codex routes through the NeMo Flow gateway, \ + which forwards to api.openai.com using OPENAI_API_KEY from the environment. \ + Without it the upstream will return 401. Export your key before launching codex \ + (e.g. `export OPENAI_API_KEY=sk-...`), or pass `--openai-base-url` to an upstream \ + that needs no key." + ); + } let hook_command = hook_forward_command(&transparent_hook_executable(), CodingAgent::Codex); let mut args = vec![ "--config".to_string(), @@ -412,6 +477,51 @@ impl PreparedRun { Ok(()) } + // Prints a compact pre-launch status banner so users see at a glance where their observability + // data is going (gateway URL, ATIF dir, OpenInference endpoint) before the agent's own UI takes + // over the terminal. Distinct from `print()` which is the verbose `--print` / `--dry-run` dump + // intended for inspection — this banner is always-on for live runs and wears the same + // NVIDIA-green rounded border as the intro banner so the brand frame stays consistent. 
+    fn print_live_status(&self, agent: CodingAgent, gateway_url: &str, resolved: &ResolvedConfig) {
+        let mut lines: Vec<String> = Vec::new();
+        lines.push(format!("NeMo Flow → {}", agent.as_arg()));
+        lines.push(format!(" Gateway {gateway_url}"));
+        match &resolved.gateway.atif_dir {
+            Some(path) => lines.push(format!(" ATIF {}", path.display())),
+            None => lines.push(" ATIF (disabled)".to_string()),
+        }
+        match &resolved.gateway.openinference_endpoint {
+            Some(endpoint) => lines.push(format!(" OpenInference {endpoint}")),
+            None => lines.push(" OpenInference (disabled)".to_string()),
+        }
+        if !self.notes.is_empty() {
+            lines.push(String::new());
+            for note in &self.notes {
+                lines.push(format!("⚠ {note}"));
+            }
+        }
+
+        let use_color = std::io::IsTerminal::is_terminal(&std::io::stdout())
+            && std::env::var_os("NO_COLOR").is_none();
+        let max_w = lines.iter().map(|l| l.chars().count()).max().unwrap_or(0);
+        // 1-char padding on each side of the longest line.
+        let inner = max_w + 2;
+
+        println!();
+        print_border_line('╭', '╮', inner, use_color);
+        for line in &lines {
+            let pad = max_w - line.chars().count();
+            let body = format!(" {line}{spaces} ", spaces = " ".repeat(pad));
+            if use_color {
+                println!("\x1b[38;5;112m│\x1b[0m{body}\x1b[38;5;112m│\x1b[0m");
+            } else {
+                println!("│{body}│");
+            }
+        }
+        print_border_line('╰', '╯', inner, use_color);
+        println!();
+    }
+
     // Prints the resolved transparent-run plan, including dynamic gateway URL, upstream base URLs,
     // argv/env injection, and any agent-specific notes or temporary files.
     fn print(&self, agent: CodingAgent, gateway_url: &str, resolved: &ResolvedConfig) {
@@ -470,12 +580,32 @@ async fn wait_for_health(gateway_url: &str) -> Result<(), CliError> {
 }
 
 fn codex_gateway_provider_config(gateway_url: &str) -> String {
+    // `wire_api="responses"` is the only value codex 0.130+ accepts; the `chat` value was
+    // removed (codex#7782).
Codex transparent run therefore only works against upstreams that + // implement `/v1/responses` (api.openai.com or a Responses-compatible proxy). For other + // upstreams the user falls back to daemon mode + `nemo-flow install codex` and codex talks + // directly to its configured upstream — we observe hooks but not LLM calls. + // + // `requires_openai_auth=false` so codex doesn't send the ChatGPT-Plus OAuth JWT from + // `~/.codex/auth.json` (the JWT is rejected by `api.openai.com` with 401). The gateway + // injects `OPENAI_API_KEY` itself; see `gateway.rs::inject_provider_auth`. Tracks NMF-86. format!( - "model_providers.nemo-flow-openai={{name=\"NeMo Flow OpenAI\",base_url={},wire_api=\"responses\",requires_openai_auth=true,supports_websockets=false}}", + "model_providers.nemo-flow-openai={{name=\"NeMo Flow OpenAI\",base_url={},wire_api=\"responses\",requires_openai_auth=false,supports_websockets=false}}", toml_string(gateway_url) ) } +// Prints one horizontal border line for the live-status frame in NVIDIA green when color is +// enabled, otherwise plain ASCII-compatible box-drawing. +fn print_border_line(left: char, right: char, inner_width: usize, color: bool) { + let dashes = "─".repeat(inner_width); + if color { + println!("\x1b[38;5;112m{left}{dashes}{right}\x1b[0m"); + } else { + println!("{left}{dashes}{right}"); + } +} + // Returns the absolute path of the running gateway binary so injected hooks can find it // without relying on the user's `PATH`. Spawned hook subprocesses inherit the agent's // environment; in transparent run, the dev/install location of the gateway is rarely on diff --git a/crates/cli/src/main.rs b/crates/cli/src/main.rs index cf764fb1..964c18af 100644 --- a/crates/cli/src/main.rs +++ b/crates/cli/src/main.rs @@ -4,7 +4,10 @@ //! NeMo Flow coding-agent gateway CLI. 
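The live-status frame geometry in `print_live_status`/`print_border_line` reduces to one invariant: inner width is the longest line plus one space of padding per side, measured in *chars* rather than bytes so multi-byte glyphs (→, ⚠, box-drawing) don't skew the frame. A self-contained sketch of that math (function name hypothetical):

```rust
// Renders a plain (uncolored) rounded frame the same way the live-status banner does:
// width is driven by char count, every row pads out to the same visual width.
fn frame(lines: &[&str]) -> Vec<String> {
    let max_w = lines.iter().map(|l| l.chars().count()).max().unwrap_or(0);
    let inner = max_w + 2; // one space of padding on each side
    let mut out = vec![format!("╭{}╮", "─".repeat(inner))];
    for line in lines {
        let pad = " ".repeat(max_w - line.chars().count());
        out.push(format!("│ {line}{pad} │"));
    }
    out.push(format!("╰{}╯", "─".repeat(inner)));
    out
}

fn main() {
    let f = frame(&["NeMo Flow → codex", "Gateway http://127.0.0.1:0"]);
    // Every row, borders included, renders at the same char width.
    let w = f[0].chars().count();
    assert!(f.iter().all(|row| row.chars().count() == w));
}
```

Had the width used `len()` (bytes), the `→` line would be over-counted by two and its row would render wider than the border.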
mod adapters; +mod banner; +mod completions_install; mod config; +mod doctor; mod error; mod gateway; mod installer; @@ -12,12 +15,13 @@ mod launcher; mod model; mod server; mod session; +mod setup; use std::process::ExitCode; use clap::Parser; -use crate::config::{Cli, Command}; +use crate::config::{Cli, CodingAgent, Command}; #[tokio::main] // Runs the async CLI entrypoint and converts any surfaced gateway error into a non-zero process @@ -47,10 +51,68 @@ async fn run() -> Result { Ok(ExitCode::SUCCESS) } Some(Command::Run(command)) => launcher::run(command, Some(&cli.server)).await, - None => { - let config = config::resolve_server_config(&cli.server)?; - server::serve(config.gateway).await?; + Some(Command::Claude(command)) => { + launcher::easy_path(CodingAgent::ClaudeCode, command, Some(&cli.server)).await + } + Some(Command::Codex(command)) => { + launcher::easy_path(CodingAgent::Codex, command, Some(&cli.server)).await + } + Some(Command::Cursor(command)) => { + launcher::easy_path(CodingAgent::Cursor, command, Some(&cli.server)).await + } + Some(Command::Hermes(command)) => { + launcher::easy_path(CodingAgent::Hermes, command, Some(&cli.server)).await + } + Some(Command::Config(command)) => { + if command.reset { + setup::reset(command.agent)?; + } else { + setup::run(command.agent).await?; + } Ok(ExitCode::SUCCESS) } + Some(Command::Doctor(command)) => doctor::run_doctor(command.json).await, + Some(Command::Agents(command)) => doctor::run_agents(command.json).await, + Some(Command::Completions(command)) => { + if command.install { + let path = completions_install::install(command.shell)?; + println!("✓ Installed completions: {}", path.display()); + } else { + let shell = command.shell.ok_or_else(|| { + error::CliError::Config( + "missing shell argument; pass a shell name (bash, zsh, fish, ...) 
or \ + use `--install` to auto-detect from $SHELL" + .into(), + ) + })?; + let mut clap_command = ::command(); + clap_complete::generate( + shell, + &mut clap_command, + "nemo-flow", + &mut std::io::stdout(), + ); + } + Ok(ExitCode::SUCCESS) + } + None => { + // Bare `nemo-flow` with no subcommand: + // - If the user passed any daemon-specific flag (`--bind`, upstream URLs, ATIF dir, + // OpenInference endpoint), they obviously want the long-running gateway daemon — + // keep that path so existing scripts that explicitly invoke daemon mode stay + // compatible. + // - Otherwise — no flags, no subcommand — interpret it as "I just typed nemo-flow, + // tell me what to do" and run the setup wizard. This matches the design intent + // ("bare invocation enters guided setup") instead of failing on a port bind that + // the user never asked for. + if cli.server.requested_daemon_mode() { + let config = config::resolve_server_config(&cli.server)?; + server::serve(config.gateway).await?; + Ok(ExitCode::SUCCESS) + } else { + setup::run(None).await?; + Ok(ExitCode::SUCCESS) + } + } } } diff --git a/crates/cli/src/server.rs b/crates/cli/src/server.rs index 260cfe11..1c62a1a4 100644 --- a/crates/cli/src/server.rs +++ b/crates/cli/src/server.rs @@ -34,7 +34,23 @@ pub(crate) struct AppState { /// Tests and transparent run mode use `serve_listener` directly so they can supply an already /// bound ephemeral listener and optional shutdown channel. pub(crate) async fn serve(config: GatewayConfig) -> Result<(), CliError> { - let listener = TcpListener::bind(config.bind).await?; + let listener = TcpListener::bind(config.bind).await.map_err(|err| { + // Translate the common bind-failure (port already in use) into an actionable message. + // Plain `io error: Address already in use (os error 48)` is unhelpful; the friendly + // version names the likely cause and points at the real fixes. 
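The bind-error translation just below this comment hinges on matching only `ErrorKind::AddrInUse` and letting every other `io::Error` pass through with its original diagnostics. A std-only sketch of that shape (the function and message text here are illustrative, not the crate's exact copy):

```rust
use std::io;

// Only AddrInUse gets the friendly rewrite; any other kind keeps the raw io error
// so unexpected failures stay debuggable.
fn friendly_bind_error(err: io::Error, bind: &str) -> String {
    if err.kind() == io::ErrorKind::AddrInUse {
        format!(
            "cannot bind {bind} — port is already in use; stop the running daemon \
             or pick another port (e.g. `--bind 127.0.0.1:0` for an ephemeral one)"
        )
    } else {
        format!("io error: {err}")
    }
}

fn main() {
    let busy = io::Error::new(io::ErrorKind::AddrInUse, "Address already in use");
    assert!(friendly_bind_error(busy, "127.0.0.1:4040").contains("already in use"));
    let other = io::Error::new(io::ErrorKind::PermissionDenied, "denied");
    assert!(friendly_bind_error(other, "127.0.0.1:80").starts_with("io error"));
}
```

Binding `127.0.0.1:0` works as the "ephemeral port" suggestion because the OS picks any free port when port 0 is requested.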
+ if err.kind() == std::io::ErrorKind::AddrInUse { + CliError::Launch(format!( + "cannot bind {} — port is already in use. Most likely cause: another \ + `nemo-flow` daemon is already running. Fix one of:\n \ + • kill the running daemon: `pkill -f nemo-flow`\n \ + • use an ephemeral port: `nemo-flow --bind 127.0.0.1:0`\n \ + • pick a free port: `nemo-flow --bind 127.0.0.1:4041`", + config.bind + )) + } else { + CliError::Io(err) + } + })?; serve_listener(listener, config, None).await } diff --git a/crates/cli/src/setup.rs b/crates/cli/src/setup.rs new file mode 100644 index 00000000..85b15fa7 --- /dev/null +++ b/crates/cli/src/setup.rs @@ -0,0 +1,727 @@ +// SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! First-run setup for `nemo-flow` configuration. +//! +//! Drives the three required prompts (scope, agents, observability backends) plus an optional +//! OpenInference endpoint follow-up, then writes a `config.toml` to the chosen scope. Pure +//! helpers (`detect_installed_agents`, `build_config`, `save_config`) are split out from the +//! `dialoguer`-driven orchestrator so the data path can be unit-tested without a TTY. + +use std::io::IsTerminal; +use std::path::{Path, PathBuf}; + +use dialoguer::theme::ColorfulTheme; +use dialoguer::{Confirm, Input, MultiSelect, Select}; +use toml_edit::{DocumentMut, Item, Table, value}; + +use crate::config::CodingAgent; +use crate::error::CliError; + +/// Where the setup saves its output. +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub(crate) enum ConfigScope { + /// `./.nemo-flow/config.toml` (walked-up workspace dir). + Project, + /// `~/.config/nemo-flow/config.toml` (or `$XDG_CONFIG_HOME/nemo-flow/config.toml`). + Global, + /// Both project and global; project takes precedence per merge order. 
Both,
+}
+
+impl ConfigScope {
+    fn label(self) -> &'static str {
+        match self {
+            Self::Project => "project ./.nemo-flow/config.toml (recommended)",
+            Self::Global => "global ~/.config/nemo-flow/config.toml",
+            Self::Both => "both project overrides global",
+        }
+    }
+}
+
+/// One of the built-in observability backends offered in setup.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub(crate) enum ObservabilityBackend {
+    /// Local ATIF trajectory files.
+    Atif,
+    /// OpenInference spans streamed to an HTTP endpoint (Phoenix, Arize, OTLP-compatible).
+    OpenInference,
+}
+
+impl ObservabilityBackend {
+    fn label(self) -> &'static str {
+        match self {
+            Self::Atif => "ATIF trajectory files ./atif/ (recommended)",
+            Self::OpenInference => {
+                "OpenInference spans (Phoenix / Arize / OTLP)"
+            }
+        }
+    }
+}
+
+/// Resolved answers from setup. Built either by `prompt_user` (interactive) or by tests.
+#[derive(Debug, Clone)]
+pub(crate) struct SetupAnswers {
+    pub scope: ConfigScope,
+    pub agents: Vec<CodingAgent>,
+    pub backends: Vec<ObservabilityBackend>,
+    pub openinference_endpoint: Option<String>,
+    /// Custom OpenAI-compatible upstream URL written to `[upstream] openai_base_url`. `None`
+    /// when the user keeps the default (`api.openai.com`) — keeps minimal configs minimal.
+    /// Currently surfaced by the codex setup branch; reusable by any future agent on the
+    /// OpenAI route family.
+    pub openai_base_url: Option<String>,
+}
+
+/// Scans `$PATH` for the supported coding-agent binaries and returns the ones present.
+///
+/// The lookup uses the same set of executable names that `CodingAgent::infer` already recognizes;
+/// detection is pure and deterministic given a fixed PATH so it can be exercised in tests by
+/// constructing a tempdir with stub binaries and pointing `$PATH` at it.
+pub(crate) fn detect_installed_agents() -> Vec<CodingAgent> {
+    detect_installed_agents_in(std::env::var_os("PATH").as_deref())
+}
+
+fn detect_installed_agents_in(path_var: Option<&std::ffi::OsStr>) -> Vec<CodingAgent> {
+    let Some(path_var) = path_var else {
+        return Vec::new();
+    };
+    // Pairs of (CodingAgent, exec name to look for on $PATH).
+    let candidates = [
+        (CodingAgent::ClaudeCode, "claude"),
+        (CodingAgent::Codex, "codex"),
+        (CodingAgent::Cursor, "cursor-agent"),
+        (CodingAgent::Hermes, "hermes"),
+    ];
+    candidates
+        .into_iter()
+        .filter_map(|(agent, exec)| {
+            let found = std::env::split_paths(path_var).any(|dir| {
+                let candidate = dir.join(exec);
+                candidate.is_file()
+            });
+            found.then_some(agent)
+        })
+        .collect()
+}
+
+/// Builds the TOML document that represents the setup's answers. Pure and testable.
+///
+/// The shape mirrors the schema landed in Phase 1: `[observability]`, `[export.openinference]`,
+/// `[agents.<name>]`. Sections are only emitted when the user opted into the corresponding
+/// behavior so the resulting file stays minimal.
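The detection probe above can be exercised exactly the way its doc comment suggests — build a directory containing a stub file, point a PATH-style value at it, and assert on what is found. A std-only sketch (the `on_path` helper is hypothetical; it mirrors the `split_paths` + `is_file` core of the real probe):

```rust
use std::env;
use std::fs;
use std::path::PathBuf;

// Core of the $PATH probe: walk each PATH entry and check whether a file with the
// agent's executable name exists there.
fn on_path(path_var: &std::ffi::OsStr, exec: &str) -> bool {
    env::split_paths(path_var).any(|dir| dir.join(exec).is_file())
}

fn main() {
    // Stub "claude" binary in a scratch dir and use that dir as the entire PATH.
    let dir: PathBuf = env::temp_dir().join("nf-detect-demo");
    fs::create_dir_all(&dir).unwrap();
    fs::write(dir.join("claude"), b"#!/bin/sh\n").unwrap();
    let path_var = env::join_paths([dir.clone()]).unwrap();
    assert!(on_path(&path_var, "claude"));
    assert!(!on_path(&path_var, "hermes"));
}
```

Note `is_file()` checks existence, not the executable bit — a plain file is enough for the probe, which keeps the test portable to filesystems without Unix permissions.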
+pub(crate) fn build_config(answers: &SetupAnswers) -> DocumentMut { + let mut doc = DocumentMut::new(); + + if answers.backends.contains(&ObservabilityBackend::Atif) { + let mut observability = Table::new(); + observability["atif_dir"] = value("./atif"); + doc["observability"] = Item::Table(observability); + } + + if answers + .backends + .contains(&ObservabilityBackend::OpenInference) + && let Some(endpoint) = answers.openinference_endpoint.as_deref() + { + let mut export = Table::new(); + let mut openinference = Table::new(); + openinference["endpoint"] = value(endpoint); + export.insert("openinference", Item::Table(openinference)); + doc["export"] = Item::Table(export); + } + + if !answers.agents.is_empty() { + let mut agents_table = Table::new(); + for agent in &answers.agents { + let (key, command) = match agent { + CodingAgent::ClaudeCode => ("claude", "claude"), + CodingAgent::Codex => ("codex", "codex"), + CodingAgent::Cursor => ("cursor", "cursor-agent"), + CodingAgent::Hermes => ("hermes", "hermes"), + }; + let mut agent_table = Table::new(); + agent_table["command"] = value(command); + agents_table.insert(key, Item::Table(agent_table)); + } + doc["agents"] = Item::Table(agents_table); + } + + if let Some(base_url) = answers.openai_base_url.as_deref() { + let mut upstream = Table::new(); + upstream["openai_base_url"] = value(base_url); + doc["upstream"] = Item::Table(upstream); + } + + doc +} + +/// Writes the setup's TOML document to the scope-appropriate path(s). +/// +/// When `merge_scope` is `Some(agent)`, an existing `config.toml` at the target path is parsed +/// and only the sections owned by THIS wizard run are replaced: `[observability]`, +/// `[export.openinference]`, `[plugins]`, and the single `[agents.]` block. Other +/// `[agents.*]` blocks are preserved. When `merge_scope` is `None`, the file is overwritten +/// outright with the wizard's full output (the user explicitly chose which agents to include). 
+/// +/// Returns the list of paths written. `home` and `cwd` are explicit so tests can drive this with +/// tempdirs. +pub(crate) fn save_config( + doc: &DocumentMut, + scope: ConfigScope, + cwd: &Path, + home: &Path, + merge_scope: Option, +) -> Result, CliError> { + let mut written = Vec::new(); + if matches!(scope, ConfigScope::Project | ConfigScope::Both) { + let project_dir = cwd.join(".nemo-flow"); + std::fs::create_dir_all(&project_dir)?; + let path = project_dir.join("config.toml"); + write_or_merge(&path, doc, merge_scope)?; + written.push(path); + } + if matches!(scope, ConfigScope::Global | ConfigScope::Both) { + let global_dir = home.join(".config").join("nemo-flow"); + std::fs::create_dir_all(&global_dir)?; + let path = global_dir.join("config.toml"); + write_or_merge(&path, doc, merge_scope)?; + written.push(path); + } + Ok(written) +} + +// Writes the wizard-built `doc` to `path`. When `merge_scope` is `Some(agent)` and the file +// already exists, preserves any `[agents.]` blocks while replacing the shared sections +// and the target agent's block. When `merge_scope` is `None`, just overwrites the file. 
+fn write_or_merge(
+    path: &Path,
+    doc: &DocumentMut,
+    merge_scope: Option<CodingAgent>,
+) -> Result<(), CliError> {
+    let Some(agent) = merge_scope else {
+        std::fs::write(path, doc.to_string())?;
+        return Ok(());
+    };
+    if !path.exists() {
+        std::fs::write(path, doc.to_string())?;
+        return Ok(());
+    }
+    let existing_raw = std::fs::read_to_string(path)?;
+    let mut existing: DocumentMut = existing_raw
+        .parse()
+        .map_err(|err| CliError::Config(format!("could not parse existing config: {err}")))?;
+    let agent_key = agent_key_and_command(agent).0;
+    merge_section(&mut existing, doc, "observability");
+    merge_section(&mut existing, doc, "export");
+    merge_section(&mut existing, doc, "plugins");
+    merge_section(&mut existing, doc, "upstream");
+    merge_agents_entry(&mut existing, doc, agent_key);
+    std::fs::write(path, existing.to_string())?;
+    Ok(())
+}
+
+// Copies a top-level section from `src` into `dst`, replacing any existing entry under the same
+// key. If `src` does not contain the section, the existing entry in `dst` is left as-is — that
+// preserves shared settings like `[upstream]` that the wizard does not touch.
+fn merge_section(dst: &mut DocumentMut, src: &DocumentMut, key: &str) {
+    if let Some(item) = src.get(key) {
+        dst[key] = item.clone();
+    }
+}
+
+// Replaces the single `[agents.<agent>]` block in `dst` with the one from `src`. If `src` does
+// not contain that block, the existing entry in `dst` is left as-is.
+fn merge_agents_entry(dst: &mut DocumentMut, src: &DocumentMut, agent_key: &str) {
+    let Some(src_agent) = src
+        .get("agents")
+        .and_then(|item| item.as_table())
+        .and_then(|table| table.get(agent_key))
+    else {
+        return;
+    };
+    if !dst.contains_key("agents") {
+        dst["agents"] = Item::Table(Table::new());
+    }
+    let agents_table = dst["agents"]
+        .as_table_mut()
+        .expect("agents key was just inserted as a table");
+    agents_table.insert(agent_key, src_agent.clone());
+}
+
+/// Removes the project `config.toml` (or just one agent's block within it).
+///
+/// `agent_hint = None` deletes the whole project config file. `agent_hint = Some(agent)` parses
+/// the existing file and removes only `[agents.<agent>]`, leaving every other section intact.
+/// In both cases this targets the *project* layer; global and system layers are left to direct
+/// editing because they typically aren't owned by the wizard.
+pub(crate) fn reset(agent_hint: Option<CodingAgent>) -> Result<(), CliError> {
+    let cwd = std::env::current_dir()?;
+    let path = cwd.join(".nemo-flow").join("config.toml");
+    if !path.exists() {
+        println!(" No project config to reset at {}", path.display());
+        return Ok(());
+    }
+    match agent_hint {
+        None => {
+            std::fs::remove_file(&path)?;
+            println!(" ✓ Removed {}", path.display());
+            println!(" Run `nemo-flow config` to set up again.");
+        }
+        Some(agent) => {
+            let agent_key = agent_key_and_command(agent).0;
+            let raw = std::fs::read_to_string(&path)?;
+            let mut doc: DocumentMut = raw.parse().map_err(|err| {
+                CliError::Config(format!("could not parse existing config: {err}"))
+            })?;
+            if let Some(agents) = doc.get_mut("agents").and_then(Item::as_table_mut) {
+                if agents.remove(agent_key).is_none() {
+                    println!(
+                        " No `[agents.{agent_key}]` block to reset in {}",
+                        path.display()
+                    );
+                    return Ok(());
+                }
+                // Remove the empty `[agents]` table itself so the file stays tidy when no agent
+                // entries remain.
+                if agents.is_empty() {
+                    doc.remove("agents");
+                }
+            }
+            std::fs::write(&path, doc.to_string())?;
+            println!(" ✓ Removed `[agents.{agent_key}]` from {}", path.display());
+        }
+    }
+    Ok(())
+}
+
+/// Drives the interactive setup. Returns the answers so callers can either save them or feed
+/// the resulting config back into a launch path. Errors when stdin isn't a TTY so non-interactive
+/// callers fall back to explicit flags instead of hanging on a prompt that nobody will see.
+///
+/// When `agent_hint` is `Some`, the agent multi-select is skipped — the user already declared
+/// intent by typing `nemo-flow claude` (or another agent name), so respect that and only ask
+/// scope + backends. To set up multiple agents, the user re-runs `nemo-flow config` later.
+pub(crate) fn prompt_user(
+    detected_agents: &[CodingAgent],
+    agent_hint: Option<CodingAgent>,
+) -> Result<SetupAnswers, CliError> {
+    ensure_tty()?;
+    let defaults = read_existing_defaults().unwrap_or_default();
+    crate::banner::print_intro();
+    match agent_hint {
+        Some(agent) => {
+            let (name, _) = agent_key_and_command(agent);
+            println!(" Setting up observability for {name}.");
+            println!(" Re-run `nemo-flow config` later to configure additional agents.");
+        }
+        None => {
+            println!(" Let's set up observability for your coding agent.");
+            println!(" This runs once. Re-run later with `nemo-flow config`.");
+        }
+    }
+    // Only print the detected-agents listing for the unscoped wizard (`nemo-flow config`),
+    // where the user is about to pick from the multi-select. When the agent was already chosen
+    // via the easy-path shortcut (`nemo-flow codex`), listing the other three agents is noise.
+    if agent_hint.is_none() {
+        println!();
+        print_detected_agents(detected_agents);
+    }
+    if defaults.has_any() {
+        println!();
+        println!(" Existing config detected — current values are pre-selected.");
+    }
+    println!();
+    // Keybinding hint shown once: dialoguer's MultiSelect needs SPACE to toggle and ENTER to
+    // confirm, but doesn't surface that itself. Without this line, users hit Enter expecting
+    // to check a box and the prompt confirms with the wrong selection.
+    println!(
+        " Tip: ↑/↓ to move, SPACE to toggle a checkbox, ENTER to confirm. Defaults are pre-selected."
+    );
+    println!();
+
+    let theme = ColorfulTheme::default();
+    let scope = ask_scope(&theme, defaults.scope)?;
+    let agents = match agent_hint {
+        Some(agent) => vec![agent],
+        None => ask_agents(&theme, detected_agents, &defaults.agents)?,
+    };
+    let (backends, openinference_endpoint) = ask_backends(&theme, &defaults)?;
+
+    let openai_base_url = if agents.contains(&CodingAgent::Codex) {
+        print_codex_api_key_guide();
+        ask_openai_base_url(&theme, defaults.openai_base_url.as_deref())?
+    } else {
+        None
+    };
+
+    Ok(SetupAnswers {
+        scope,
+        agents,
+        backends,
+        openinference_endpoint,
+        openai_base_url,
+    })
+}
+
+/// Pre-filled wizard defaults read from an existing `config.toml`. When the file is missing or
+/// unparseable the defaults are all-empty and the wizard behaves like a first-run setup.
+#[derive(Debug, Clone, Default)]
+struct Defaults {
+    scope: Option<ConfigScope>,
+    agents: Vec<CodingAgent>,
+    atif_enabled: bool,
+    openinference_endpoint: Option<String>,
+    openai_base_url: Option<String>,
+}
+
+impl Defaults {
+    fn has_any(&self) -> bool {
+        self.scope.is_some()
+            || !self.agents.is_empty()
+            || self.atif_enabled
+            || self.openinference_endpoint.is_some()
+            || self.openai_base_url.is_some()
+    }
+}
+
+/// Reads the highest-precedence existing config file and derives wizard defaults from it.
+/// Workspace config wins over global; if both exist, scope defaults to `Both`. Missing or
+/// malformed files yield `None` (the wizard then behaves as if no config existed).
+fn read_existing_defaults() -> Option<Defaults> {
+    let cwd = std::env::current_dir().ok()?;
+    let home = home_dir();
+
+    let workspace_path = cwd.join(".nemo-flow").join("config.toml");
+    let global_path = home
+        .as_ref()
+        .map(|h| h.join(".config").join("nemo-flow").join("config.toml"));
+
+    let workspace_exists = workspace_path.exists();
+    let global_exists = global_path.as_ref().is_some_and(|p| p.exists());
+
+    let read_doc =
+        |path: &Path| -> Option<DocumentMut> { std::fs::read_to_string(path).ok()?.parse().ok() };
+
+    let doc = match (workspace_exists, global_exists) {
+        (true, _) => read_doc(&workspace_path)?,
+        (false, true) => read_doc(global_path.as_ref()?)?,
+        (false, false) => return None,
+    };
+
+    let scope = match (workspace_exists, global_exists) {
+        (true, true) => Some(ConfigScope::Both),
+        (true, false) => Some(ConfigScope::Project),
+        (false, true) => Some(ConfigScope::Global),
+        (false, false) => None,
+    };
+
+    Some(Defaults {
+        scope,
+        agents: read_agents_from_doc(&doc),
+        atif_enabled: doc
+            .get("observability")
+            .and_then(|i| i.as_table())
+            .and_then(|t| t.get("atif_dir"))
+            .is_some(),
+        openinference_endpoint: doc
+            .get("export")
+            .and_then(|i| i.as_table())
+            .and_then(|t| t.get("openinference"))
+            .and_then(|i| i.as_table())
+            .and_then(|t| t.get("endpoint"))
+            .and_then(|i| i.as_str())
+            .map(str::to_string),
+        openai_base_url: doc
+            .get("upstream")
+            .and_then(|i| i.as_table())
+            .and_then(|t| t.get("openai_base_url"))
+            .and_then(|i| i.as_str())
+            .map(str::to_string),
+    })
+}
+
+fn read_agents_from_doc(doc: &DocumentMut) -> Vec<CodingAgent> {
+    let Some(table) = doc.get("agents").and_then(|i| i.as_table()) else {
+        return Vec::new();
+    };
+    let mut found = Vec::new();
+    for (key, _) in table.iter() {
+        let agent = match key {
+            "claude" => Some(CodingAgent::ClaudeCode),
+            "codex" => Some(CodingAgent::Codex),
+            "cursor" => Some(CodingAgent::Cursor),
+            "hermes" => Some(CodingAgent::Hermes),
+            _ => None,
+        };
+        if let Some(agent) = agent {
+            found.push(agent);
+        }
+    }
+    found
+}
+
+fn print_codex_api_key_guide() {
+    // Codex 0.130 only accepts `wire_api="responses"` (codex#7782 removed `chat`), so running
+    // codex transparently requires a Responses-compatible upstream. The gateway injects the API
+    // key on outbound forwards (NMF-86) — the user just sets OPENAI_API_KEY in their environment;
+    // any Bearer-token key works (OpenAI, internal proxy, etc.) as long as the upstream
+    // accepts it.
+    println!();
+    println!(" ℹ Codex sends Responses-API requests through the gateway.");
+    println!(" The gateway injects OPENAI_API_KEY on outbound forwards. Set it before");
+    println!(" launching codex: export OPENAI_API_KEY=...");
+    println!(" Any Bearer-token key works (OpenAI developer key, internal proxy, etc.)");
+    println!(" — the ChatGPT-Plus OAuth in ~/.codex/auth.json is NOT used.");
+    println!();
+}
+
+fn ask_openai_base_url(
+    theme: &ColorfulTheme,
+    existing: Option<&str>,
+) -> Result<Option<String>, CliError> {
+    // Pre-fill with the existing `[upstream] openai_base_url` if there is one, else the OpenAI
+    // default. We return Some only when the user's value differs from the OpenAI default —
+    // matching the upstream behavior (writes minimal configs, omits the default).
+    let initial = existing.unwrap_or("https://api.openai.com");
+    let url: String = Input::with_theme(theme)
+        .with_prompt("Codex upstream URL (Responses-compatible)")
+        .with_initial_text(initial)
+        .interact_text()
+        .map_err(setup_error)?;
+    if url == "https://api.openai.com" {
+        Ok(None)
+    } else {
+        Ok(Some(url))
+    }
+}
+
+fn ensure_tty() -> Result<(), CliError> {
+    if !std::io::stdin().is_terminal() {
+        return Err(CliError::Config(
+            "interactive setup requires a TTY; pass `--config <path>` or set up \
+             `.nemo-flow/config.toml` manually"
+                .into(),
+        ));
+    }
+    Ok(())
+}
+
+fn print_detected_agents(detected: &[CodingAgent]) {
+    println!(" Detected agents on $PATH:");
+    for agent in detected {
+        let (name, _) = agent_key_and_command(*agent);
+        println!(" ✓ {name}");
+    }
+    if detected.is_empty() {
+        println!(" (none — you can still configure observability and add agents later)");
+    }
+}
+
+fn ask_scope(
+    theme: &ColorfulTheme,
+    existing: Option<ConfigScope>,
+) -> Result<ConfigScope, CliError> {
+    let options = [ConfigScope::Project, ConfigScope::Global, ConfigScope::Both];
+    let labels: Vec<&str> = options.iter().map(|s| s.label()).collect();
+    // Cursor starts on the user's existing scope if there is one (so re-running the wizard
+    // doesn't accidentally relocate their config), else `Project` per the design default.
+    let default_idx = existing
+        .and_then(|s| options.iter().position(|opt| *opt == s))
+        .unwrap_or(0);
+    let idx = Select::with_theme(theme)
+        .with_prompt("Save config where?")
+        .items(&labels)
+        .default(default_idx)
+        .interact()
+        .map_err(setup_error)?;
+    Ok(options[idx])
+}
+
+fn ask_agents(
+    theme: &ColorfulTheme,
+    detected: &[CodingAgent],
+    configured: &[CodingAgent],
+) -> Result<Vec<CodingAgent>, CliError> {
+    let all_supported = [
+        CodingAgent::ClaudeCode,
+        CodingAgent::Codex,
+        CodingAgent::Cursor,
+        CodingAgent::Hermes,
+    ];
+    let labels: Vec<String> = all_supported
+        .iter()
+        .map(|a| {
+            let (name, _) = agent_key_and_command(*a);
+            name.to_string()
+        })
+        .collect();
+    // Pre-check: the agents in the existing config when there is one, otherwise the agents
+    // detected on $PATH. The existing entries take precedence — if the user previously
+    // deselected an agent that's on PATH, we shouldn't re-check it for them. On first run
+    // (no existing config), this falls back to pre-checking everything detected.
+    let defaults: Vec<bool> = if configured.is_empty() {
+        all_supported.iter().map(|a| detected.contains(a)).collect()
+    } else {
+        all_supported
+            .iter()
+            .map(|a| configured.contains(a))
+            .collect()
+    };
+    let selected_idx = MultiSelect::with_theme(theme)
+        .with_prompt("Which agents to observe?")
+        .items(&labels)
+        .defaults(&defaults)
+        .interact()
+        .map_err(setup_error)?;
+    Ok(selected_idx.into_iter().map(|i| all_supported[i]).collect())
+}
+
+fn ask_backends(
+    theme: &ColorfulTheme,
+    existing: &Defaults,
+) -> Result<(Vec<ObservabilityBackend>, Option<String>), CliError> {
+    let options = [
+        ObservabilityBackend::Atif,
+        ObservabilityBackend::OpenInference,
+    ];
+    let labels: Vec<&str> = options.iter().map(|b| b.label()).collect();
+    // Pre-check from existing config when present. On first run, falls back to ATIF on (zero
+    // infra) and OpenInference off (needs an endpoint running).
+    let defaults = if existing.has_any() {
+        [
+            existing.atif_enabled,
+            existing.openinference_endpoint.is_some(),
+        ]
+    } else {
+        [true, false]
+    };
+    let selected_idx = MultiSelect::with_theme(theme)
+        .with_prompt("Observability backends?")
+        .items(&labels)
+        .defaults(&defaults)
+        .interact()
+        .map_err(setup_error)?;
+    let backends: Vec<ObservabilityBackend> =
+        selected_idx.into_iter().map(|i| options[i]).collect();
+
+    let openinference_endpoint = if backends.contains(&ObservabilityBackend::OpenInference) {
+        let initial = existing
+            .openinference_endpoint
+            .as_deref()
+            .unwrap_or("http://localhost:6006/v1/traces");
+        let endpoint: String = Input::with_theme(theme)
+            .with_prompt("OpenInference endpoint URL")
+            .with_initial_text(initial)
+            .interact_text()
+            .map_err(setup_error)?;
+        Some(endpoint)
+    } else {
+        None
+    };
+
+    Ok((backends, openinference_endpoint))
+}
+
+/// Confirms the summary with the user before writing the file. Returns true if the user accepted.
+/// Shows both the destination path(s) and the exact TOML body about to be written so the user
+/// can verify what they're committing to instead of confirming a path blindly.
+pub(crate) fn confirm_summary(
+    written_paths: &[PathBuf],
+    doc: &DocumentMut,
+) -> Result<bool, CliError> {
+    println!();
+    println!(" ─── Summary ─────────────────────────────────────────────");
+    println!(" Will write to:");
+    for path in written_paths {
+        println!(" {}", path.display());
+    }
+    println!();
+    println!(" Contents:");
+    for line in doc.to_string().lines() {
+        println!(" {line}");
+    }
+    println!();
+    Confirm::with_theme(&ColorfulTheme::default())
+        .with_prompt("Looks good?")
+        .default(true)
+        .interact()
+        .map_err(setup_error)
+}
+
+fn setup_error(err: dialoguer::Error) -> CliError {
+    // dialoguer errors are mostly IO. Translate cancellation (Ctrl-C, EOF on stdin) into a
+    // friendly "cancelled" message; surface anything else as the raw error.
+    match err {
+        dialoguer::Error::IO(io_err)
+            if matches!(
+                io_err.kind(),
+                std::io::ErrorKind::Interrupted | std::io::ErrorKind::UnexpectedEof
+            ) =>
+        {
+            CliError::Config("setup cancelled — no config saved".into())
+        }
+        other => CliError::Config(format!("setup error: {other}")),
+    }
+}
+
+fn agent_key_and_command(agent: CodingAgent) -> (&'static str, &'static str) {
+    match agent {
+        CodingAgent::ClaudeCode => ("claude", "claude"),
+        CodingAgent::Codex => ("codex", "codex"),
+        CodingAgent::Cursor => ("cursor", "cursor-agent"),
+        CodingAgent::Hermes => ("hermes", "hermes"),
+    }
+}
+
+/// Top-level setup entry point used by `nemo-flow config` and the easy-path fallback.
+/// Detects agents, prompts the user, writes the config, prints a final summary.
+///
+/// `agent_hint` carries the agent the user typed on the easy path (`nemo-flow claude`); when
+/// `Some`, the agent multi-select is skipped because intent is already declared. `None` from
+/// `nemo-flow config` asks the full set so users can configure multiple agents at once.
+pub(crate) async fn run(agent_hint: Option<CodingAgent>) -> Result<(), CliError> {
+    let detected = detect_installed_agents();
+    let answers = prompt_user(&detected, agent_hint)?;
+    let doc = build_config(&answers);
+
+    let cwd = std::env::current_dir()?;
+    let home = home_dir().ok_or_else(|| {
+        CliError::Config("cannot determine home directory (set $HOME or $USERPROFILE)".into())
+    })?;
+    let preview_paths = preview_paths(answers.scope, &cwd, &home);
+
+    if !confirm_summary(&preview_paths, &doc)? {
+        return Err(CliError::Config("setup cancelled — no config saved".into()));
+    }
+
+    let written = save_config(&doc, answers.scope, &cwd, &home, agent_hint)?;
+    println!();
+    println!(" ✓ Saved:");
+    for path in &written {
+        println!(" {}", path.display());
+    }
+    println!();
+    Ok(())
+}
+
+fn preview_paths(scope: ConfigScope, cwd: &Path, home: &Path) -> Vec<PathBuf> {
+    let mut paths = Vec::new();
+    if matches!(scope, ConfigScope::Project | ConfigScope::Both) {
+        paths.push(cwd.join(".nemo-flow").join("config.toml"));
+    }
+    if matches!(scope, ConfigScope::Global | ConfigScope::Both) {
+        paths.push(home.join(".config").join("nemo-flow").join("config.toml"));
+    }
+    paths
+}
+
+fn home_dir() -> Option<PathBuf> {
+    std::env::var_os("HOME")
+        .or_else(|| std::env::var_os("USERPROFILE"))
+        .map(PathBuf::from)
+}
+
+#[cfg(test)]
+#[path = "../tests/coverage/setup_tests.rs"]
+mod tests;
diff --git a/crates/cli/tests/cli_tests.rs b/crates/cli/tests/cli_tests.rs
index 66b6bee0..9897af41 100644
--- a/crates/cli/tests/cli_tests.rs
+++ b/crates/cli/tests/cli_tests.rs
@@ -21,6 +21,50 @@ fn cli_help_exits_successfully() {
     assert!(String::from_utf8_lossy(&output.stdout).contains("Coding-agent gateway"));
 }
 
+#[test]
+fn cli_help_lists_easy_path_agent_shortcuts() {
+    let output = Command::new(gateway_bin()).arg("--help").output().unwrap();
+    let stdout = String::from_utf8_lossy(&output.stdout);
+
+    for agent in ["claude", "codex", "cursor", "hermes"] {
+        assert!(
+            stdout.contains(&format!(" {agent}")),
+            "expected `--help` to list `{agent}` subcommand, got:\n{stdout}"
+        );
+    }
+}
+
+#[test]
+fn cli_easy_path_invokes_setup_when_no_config_found() {
+    // When no config exists anywhere, the easy path fires setup. In a non-TTY test
+    // context the setup errors with a clear "requires a TTY" message; that's the contract
+    // we lock in here. Interactive testing of setup itself lives in the unit tests
+    // (build_config, save_config) since spawning real prompt UI from cargo-test is brittle.
+ let temp = tempfile::tempdir().unwrap(); + let xdg = temp.path().join("xdg"); + std::fs::create_dir_all(&xdg).unwrap(); + let cwd = temp.path().join("workdir"); + std::fs::create_dir_all(&cwd).unwrap(); + + let output = Command::new(gateway_bin()) + .current_dir(&cwd) + .env("XDG_CONFIG_HOME", &xdg) + .env("HOME", temp.path()) + .arg("claude") + .output() + .unwrap(); + + assert!( + !output.status.success(), + "easy path should exit non-zero when no config + no TTY for setup" + ); + let stderr = String::from_utf8_lossy(&output.stderr); + assert!( + stderr.contains("setup requires a TTY"), + "expected non-TTY setup error in stderr, got:\n{stderr}" + ); +} + #[test] fn cli_install_dry_run_plans_without_writing() { let temp = tempfile::tempdir().unwrap(); @@ -55,15 +99,15 @@ fn cli_install_dry_run_plans_without_writing() { #[test] fn cli_run_dry_run_resolves_config_and_command() { let temp = tempfile::tempdir().unwrap(); - let config = temp.path().join("gateway.toml"); + let config = temp.path().join("config.toml"); std::fs::write( &config, r#" -[server] +[upstream] openai_base_url = "http://file-openai" anthropic_base_url = "http://file-anthropic" -[session] +[observability] atif_dir = "file-atif" [export.openinference] @@ -104,17 +148,17 @@ fn cli_run_dry_run_uses_project_user_and_env_config_layers() { std::fs::create_dir_all(&nested).unwrap(); std::fs::create_dir_all(&xdg).unwrap(); std::fs::write( - project.join(".nemo-flow/gateway.toml"), + project.join(".nemo-flow/config.toml"), r#" -[server] +[upstream] openai_base_url = "http://project-openai" "#, ) .unwrap(); std::fs::write( - xdg.join("gateway.toml"), + xdg.join("config.toml"), r#" -[server] +[upstream] anthropic_base_url = "http://user-anthropic" [agents.codex] diff --git a/crates/cli/tests/coverage/completions_install_tests.rs b/crates/cli/tests/coverage/completions_install_tests.rs new file mode 100644 index 00000000..9e9bb677 --- /dev/null +++ b/crates/cli/tests/coverage/completions_install_tests.rs @@ 
-0,0 +1,78 @@ +// SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +use super::*; +use std::ffi::OsString; +use std::path::PathBuf; + +use clap_complete::Shell; + +#[test] +fn zsh_uses_zdotdir_when_set() { + let path = completion_path( + Shell::Zsh, + Some(OsString::from("/home/u")), + Some(OsString::from("/home/u/dot")), + ) + .unwrap(); + assert_eq!(path, PathBuf::from("/home/u/dot/.zfunc/_nemo-flow")); +} + +#[test] +fn zsh_falls_back_to_home_without_zdotdir() { + let path = completion_path(Shell::Zsh, Some(OsString::from("/home/u")), None).unwrap(); + assert_eq!(path, PathBuf::from("/home/u/.zfunc/_nemo-flow")); +} + +#[test] +fn bash_uses_home_dot_bash_completion_d() { + let path = completion_path(Shell::Bash, Some(OsString::from("/home/u")), None).unwrap(); + assert_eq!(path, PathBuf::from("/home/u/.bash_completion.d/nemo-flow")); +} + +#[test] +fn fish_uses_xdg_config_fish_completions() { + let path = completion_path(Shell::Fish, Some(OsString::from("/home/u")), None).unwrap(); + assert_eq!( + path, + PathBuf::from("/home/u/.config/fish/completions/nemo-flow.fish") + ); +} + +#[test] +fn powershell_is_rejected() { + let error = completion_path(Shell::PowerShell, Some(OsString::from("/home/u")), None) + .unwrap_err() + .to_string(); + assert!(error.contains("does not support"), "error was: {error}"); +} + +#[test] +fn detect_shell_recognises_known_basenames() { + assert_eq!( + detect_shell(Some(OsString::from("/bin/zsh"))).unwrap(), + Shell::Zsh + ); + assert_eq!( + detect_shell(Some(OsString::from("/usr/local/bin/bash"))).unwrap(), + Shell::Bash + ); + assert_eq!( + detect_shell(Some(OsString::from("/opt/homebrew/bin/fish"))).unwrap(), + Shell::Fish + ); +} + +#[test] +fn detect_shell_rejects_unknown_shell() { + let error = detect_shell(Some(OsString::from("/bin/tcsh"))) + .unwrap_err() + .to_string(); + assert!(error.contains("tcsh"), "error was: {error}"); +} + 
+#[test] +fn detect_shell_rejects_missing_shell_env() { + let error = detect_shell(None).unwrap_err().to_string(); + assert!(error.contains("$SHELL is not set"), "error was: {error}"); +} diff --git a/crates/cli/tests/coverage/config_tests.rs b/crates/cli/tests/coverage/config_tests.rs index 6f5b4a34..e31208ea 100644 --- a/crates/cli/tests/coverage/config_tests.rs +++ b/crates/cli/tests/coverage/config_tests.rs @@ -107,23 +107,25 @@ fn agent_inference_uses_executable_basename() { #[test] fn explicit_toml_config_maps_supported_sections() { let temp = tempfile::tempdir().unwrap(); - let path = temp.path().join("gateway.toml"); + let path = temp.path().join("config.toml"); std::fs::write( &path, r#" -[server] +[upstream] openai_base_url = "http://openai" anthropic_base_url = "http://anthropic" -[session] +[observability] atif_dir = "atif" metadata = { team = "obs" } -plugin_config = { components = [] } [export.openinference] endpoint = "http://otel" -[agents.claude-code] +[plugins] +config = { components = [] } + +[agents.claude] command = "claude" [agents.codex] @@ -177,14 +179,14 @@ command = "hermes --yolo chat" #[test] fn cli_run_overrides_config_values() { let temp = tempfile::tempdir().unwrap(); - let path = temp.path().join("gateway.toml"); + let path = temp.path().join("config.toml"); std::fs::write( &path, r#" -[server] +[upstream] openai_base_url = "http://file-openai" -[session] +[observability] atif_dir = "file-atif" metadata = { team = "file" } "#, @@ -214,11 +216,11 @@ metadata = { team = "file" } #[test] fn run_inherits_top_level_server_flags_when_subcommand_flags_are_absent() { let temp = tempfile::tempdir().unwrap(); - let path = temp.path().join("gateway.toml"); + let path = temp.path().join("config.toml"); std::fs::write( &path, r#" -[server] +[upstream] openai_base_url = "http://file-openai" "#, ) @@ -317,7 +319,7 @@ fn malformed_shared_config_reports_context() { assert!(error.contains("invalid TOML")); let invalid_shape = 
temp.path().join("invalid-shape.toml");
-    std::fs::write(&invalid_shape, "server = \"not-a-table\"").unwrap();
+    std::fs::write(&invalid_shape, "upstream = \"not-a-table\"").unwrap();
     let args = ServerArgs {
         config: Some(invalid_shape),
         ..ServerArgs::default()
@@ -331,11 +333,11 @@ fn recursive_toml_merge_replaces_scalars_and_preserves_tables() {
     let mut left: toml::Value = r#"
-[server]
+[upstream]
 openai_base_url = "http://old"
 anthropic_base_url = "http://anthropic"
 
-[session.metadata]
+[observability.metadata]
 team = "old"
 env = "dev"
 "#
     .parse::<toml::Table>()
     .map(toml::Value::Table)
     .unwrap();
     let right: toml::Value = r#"
-[server]
+[upstream]
 openai_base_url = "http://new"
 
-[session.metadata]
+[observability.metadata]
 team = "new"
 "#
     .parse::<toml::Table>()
@@ -356,13 +358,19 @@ team = "new"
     merge_toml(&mut left, right);
 
     assert_eq!(
-        left["server"]["openai_base_url"].as_str(),
+        left["upstream"]["openai_base_url"].as_str(),
         Some("http://new")
     );
     assert_eq!(
-        left["server"]["anthropic_base_url"].as_str(),
+        left["upstream"]["anthropic_base_url"].as_str(),
         Some("http://anthropic")
     );
-    assert_eq!(left["session"]["metadata"]["team"].as_str(), Some("new"));
-    assert_eq!(left["session"]["metadata"]["env"].as_str(), Some("dev"));
+    assert_eq!(
+        left["observability"]["metadata"]["team"].as_str(),
+        Some("new")
+    );
+    assert_eq!(
+        left["observability"]["metadata"]["env"].as_str(),
+        Some("dev")
+    );
 }
diff --git a/crates/cli/tests/coverage/doctor_tests.rs b/crates/cli/tests/coverage/doctor_tests.rs
new file mode 100644
index 00000000..231d53b5
--- /dev/null
+++ b/crates/cli/tests/coverage/doctor_tests.rs
@@ -0,0 +1,170 @@
+// SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+// SPDX-License-Identifier: Apache-2.0 + +use super::*; +use std::path::PathBuf; + +fn empty_report() -> DoctorReport { + DoctorReport { + schema_version: 1, + binary_version: "0.0.0-test", + environment: EnvironmentInfo { + os: "macos 25.3.0".into(), + arch: "aarch64", + shell: Some("zsh".into()), + }, + configuration: ConfigurationInfo { + workspace: ConfigLayer { + path: PathBuf::from("/x/.nemo-flow/config.toml"), + status: Status::Info, + details: "not present".into(), + }, + global: ConfigLayer { + path: PathBuf::from("/x/.config/nemo-flow/config.toml"), + status: Status::Info, + details: "not present".into(), + }, + system: ConfigLayer { + path: PathBuf::from("/etc/nemo-flow/config.toml"), + status: Status::Info, + details: "not present".into(), + }, + default_agent: None, + }, + agents: vec![], + observability: vec![], + completions: vec![], + } +} + +#[test] +fn exit_code_passes_when_no_failures() { + let report = empty_report(); + assert_eq!(exit_code(&report), 0); +} + +#[test] +fn exit_code_fails_when_observability_check_fails() { + let mut report = empty_report(); + report.observability.push(Check { + name: "ATIF dir", + status: Status::Fail, + details: "not writable".into(), + }); + assert_eq!(exit_code(&report), 1); +} + +#[test] +fn exit_code_passes_with_warn_only() { + let mut report = empty_report(); + report.observability.push(Check { + name: "OpenInference endpoint", + status: Status::Warn, + details: "HTTP 500".into(), + }); + assert_eq!(exit_code(&report), 0); +} + +#[test] +fn exit_code_fails_when_workspace_config_is_invalid() { + let mut report = empty_report(); + report.configuration.workspace.status = Status::Fail; + report.configuration.workspace.details = "invalid TOML".into(); + assert_eq!(exit_code(&report), 1); +} + +#[test] +fn format_human_emits_fixed_section_order() { + let report = empty_report(); + let rendered = format_human(&report); + + // Locking in the section order so users can diff `doctor` output across machines. 
+ let env_idx = rendered.find("Environment").expect("Environment header"); + let cfg_idx = rendered + .find("Configuration") + .expect("Configuration header"); + let agents_idx = rendered.find("Agents detected").expect("Agents header"); + let obs_idx = rendered + .find("Observability") + .expect("Observability header"); + let comp_idx = rendered.find("Completions").expect("Completions header"); + + assert!(env_idx < cfg_idx); + assert!(cfg_idx < agents_idx); + assert!(agents_idx < obs_idx); + assert!(obs_idx < comp_idx); +} + +#[test] +fn format_human_reports_all_checks_passed_on_clean_report() { + let report = empty_report(); + let rendered = format_human(&report); + assert!(rendered.contains("All checks passed.")); +} + +#[test] +fn format_human_reports_failure_summary_when_anything_failed() { + let mut report = empty_report(); + report.observability.push(Check { + name: "ATIF dir", + status: Status::Fail, + details: "not writable".into(), + }); + let rendered = format_human(&report); + assert!(rendered.contains("Some checks FAILED")); +} + +#[test] +fn format_json_is_stable_and_versioned() { + let report = empty_report(); + let json = format_json(&report).unwrap(); + let parsed: serde_json::Value = serde_json::from_str(&json).unwrap(); + // schema_version pins the wire format. Bump only on breaking renames/removals. 
+ assert_eq!(parsed["schema_version"], 1); + assert!(parsed["environment"]["os"].is_string()); + assert!(parsed["agents"].is_array()); +} + +#[test] +fn format_agents_human_lists_supported_and_separates_detected() { + let agents = vec![ + AgentInfo { + name: "claude", + path: Some(PathBuf::from("/opt/homebrew/bin/claude")), + version: Some("2.1.4".into()), + annotation: String::new(), + }, + AgentInfo { + name: "codex", + path: None, + version: None, + annotation: String::new(), + }, + ]; + let rendered = format_agents_human(&agents); + assert!(rendered.contains("Supported")); + assert!(rendered.contains("Detected on this machine")); + // Supported lists everything; detected only the one with a path. + assert!(rendered.contains("claude\n")); + assert!(rendered.contains("codex\n")); + assert!(rendered.contains("/opt/homebrew/bin/claude")); + // codex must NOT show up under the detected block because path is None. + let detected_block = rendered.split("Detected on this machine").nth(1).unwrap(); + assert!(!detected_block.contains("codex")); +} + +#[test] +fn format_agents_json_matches_doctor_agents_shape() { + let agents = vec![AgentInfo { + name: "claude", + path: Some(PathBuf::from("/opt/homebrew/bin/claude")), + version: Some("2.1.4".into()), + annotation: String::new(), + }]; + let json = format_agents_json(&agents).unwrap(); + let parsed: serde_json::Value = serde_json::from_str(&json).unwrap(); + assert!(parsed.is_array()); + assert_eq!(parsed[0]["name"], "claude"); + assert_eq!(parsed[0]["version"], "2.1.4"); + assert_eq!(parsed[0]["path"], "/opt/homebrew/bin/claude"); +} diff --git a/crates/cli/tests/coverage/gateway_tests.rs b/crates/cli/tests/coverage/gateway_tests.rs index 689975aa..d239508c 100644 --- a/crates/cli/tests/coverage/gateway_tests.rs +++ b/crates/cli/tests/coverage/gateway_tests.rs @@ -202,6 +202,129 @@ fn observable_headers_omit_secrets_and_transport_headers() { assert!(!observed.contains_key("connection")); } +#[test] +fn 
strips_chatgpt_plus_jwt_from_openai_route_inbound() { + // NMF-86: codex 0.130 still sends the ChatGPT-Plus OAuth JWT from ~/.codex/auth.json on + // outbound requests even when its provider override sets `requires_openai_auth=false`. The + // JWT is a consumer token rejected by api.openai.com / LiteLLM-fronted endpoints with 401. + // The gateway strips JWT-shaped (`Bearer eyJ...`) Authorization on OpenAI routes so the + // auth-injection path falls through and substitutes a real env-provided key. + let mut inbound = HeaderMap::new(); + inbound.insert( + "authorization", + HeaderValue::from_static("Bearer eyJhbGciOiJIUzI1NiJ9.deadbeef.signature"), + ); + let sanitized = strip_chatgpt_oauth_for_openai_route(&inbound, ProviderRoute::OpenAiResponses); + assert!(sanitized.get("authorization").is_none()); +} + +#[test] +fn preserves_real_bearer_keys_on_openai_route() { + // Real provider keys (Hermes's `sk-...` against NVIDIA, an actual OpenAI dev key, etc.) + // must pass through untouched — only the consumer JWT shape (`Bearer eyJ...`) is stripped. + let mut inbound = HeaderMap::new(); + inbound.insert( + "authorization", + HeaderValue::from_static("Bearer sk-real-provider-key"), + ); + let sanitized = strip_chatgpt_oauth_for_openai_route(&inbound, ProviderRoute::OpenAiResponses); + assert_eq!( + sanitized.get("authorization").unwrap(), + "Bearer sk-real-provider-key" + ); +} + +#[test] +fn does_not_touch_anthropic_route_authorization() { + // Defensive — the JWT shape only conflicts with OpenAI routes; Anthropic routes use + // `x-api-key` anyway. Leaving Anthropic's Authorization alone avoids any cross-provider + // edge cases. 
+ let mut inbound = HeaderMap::new(); + inbound.insert( + "authorization", + HeaderValue::from_static("Bearer eyJ.anthropic.case"), + ); + let sanitized = + strip_chatgpt_oauth_for_openai_route(&inbound, ProviderRoute::AnthropicMessages); + assert!(sanitized.get("authorization").is_some()); +} + +#[test] +fn injects_openai_bearer_when_inbound_has_no_auth() { + // NMF-86 mitigation: codex now sends no credentials, so the gateway must inject + // `Authorization: Bearer ${OPENAI_API_KEY}` on outbound forwards to api.openai.com. + let http = Client::new(); + let inbound = HeaderMap::new(); + let env = |k: &str| match k { + "OPENAI_API_KEY" => Some("sk-test-123".into()), + _ => None, + }; + let builder = http.get("http://upstream/v1/responses"); + let built = + inject_provider_auth_with_env(builder, ProviderRoute::OpenAiResponses, &inbound, env) + .build() + .unwrap(); + assert_eq!( + built.headers().get("authorization").unwrap(), + "Bearer sk-test-123" + ); +} + +#[test] +fn injects_anthropic_x_api_key_for_anthropic_routes() { + let http = Client::new(); + let inbound = HeaderMap::new(); + let env = |k: &str| match k { + "ANTHROPIC_API_KEY" => Some("sk-ant-test".into()), + _ => None, + }; + let builder = http.post("http://upstream/v1/messages"); + let built = + inject_provider_auth_with_env(builder, ProviderRoute::AnthropicMessages, &inbound, env) + .build() + .unwrap(); + assert_eq!(built.headers().get("x-api-key").unwrap(), "sk-ant-test"); + // Anthropic uses `x-api-key`, not Authorization. The gateway must not duplicate the secret + // into a Bearer header — that would defeat the purpose of using the provider's standard + // auth scheme and might trigger upstream-side rejection of the conflicting auth. 
+ assert!(built.headers().get("authorization").is_none()); +} + +#[test] +fn skips_injection_when_inbound_already_has_authorization() { + // If the agent (e.g., a future codex version, or anyone using the gateway directly) sends + // its own auth, we must not stomp on it. + let http = Client::new(); + let mut inbound = HeaderMap::new(); + inbound.insert( + "authorization", + HeaderValue::from_static("Bearer agent-supplied"), + ); + let env = |_: &str| Some("sk-test-from-env".into()); + let builder = http.post("http://upstream/v1/responses"); + let built = + inject_provider_auth_with_env(builder, ProviderRoute::OpenAiResponses, &inbound, env) + .build() + .unwrap(); + // The builder doesn't carry inbound headers itself (forward_upstream_request adds them in a + // separate loop), so the only header on `built` would be the env-injected one. Since the + // inbound had auth, we expect no injection at all. + assert!(built.headers().get("authorization").is_none()); +} + +#[test] +fn skips_injection_when_env_var_unset() { + let http = Client::new(); + let inbound = HeaderMap::new(); + let env = |_: &str| None; + let builder = http.post("http://upstream/v1/responses"); + let built = + inject_provider_auth_with_env(builder, ProviderRoute::OpenAiResponses, &inbound, env) + .build() + .unwrap(); + assert!(built.headers().get("authorization").is_none()); +} + #[tokio::test] async fn passthrough_rejects_unsupported_provider_path_directly() { let config = GatewayConfig { diff --git a/crates/cli/tests/coverage/installer_tests.rs b/crates/cli/tests/coverage/installer_tests.rs index e210f711..f85e085f 100644 --- a/crates/cli/tests/coverage/installer_tests.rs +++ b/crates/cli/tests/coverage/installer_tests.rs @@ -60,7 +60,7 @@ fn generates_claude_install_file() { json["hooks"]["PreToolUse"][0]["hooks"][0]["command"] .as_str() .unwrap() - .contains("hook-forward claude-code") + .contains("hook-forward claude") ); } @@ -250,7 +250,7 @@ fn 
install_writes_file_and_backs_up_existing_config() { install(command(CodingAgent::ClaudeCode, temp.path())).unwrap(); let installed = std::fs::read_to_string(&settings).unwrap(); - assert!(installed.contains("hook-forward claude-code")); + assert!(installed.contains("hook-forward claude")); let backups: Vec<_> = std::fs::read_dir(&claude_dir) .unwrap() .map(|entry| entry.unwrap().file_name().to_string_lossy().into_owned()) diff --git a/crates/cli/tests/coverage/launcher_tests.rs b/crates/cli/tests/coverage/launcher_tests.rs index 127427ce..9a3f46c1 100644 --- a/crates/cli/tests/coverage/launcher_tests.rs +++ b/crates/cli/tests/coverage/launcher_tests.rs @@ -114,11 +114,15 @@ fn inference_failure_has_actionable_message() { .unwrap_err() .to_string(); - assert!(error.contains("pass --agent claude-code")); + assert!(error.contains("pass --agent claude")); } #[test] -fn missing_configured_command_has_actionable_messages() { +fn missing_command_without_agent_errors() { + // Bare `nemo-flow run` (no command, no --agent) errors — we have nothing to spawn and no + // argv[0] to infer an agent from. With --agent set, we fall back to the agent's default + // binary name (e.g., `cursor-agent`), so that branch is exercised in the resolution test + // below rather than here. let command = RunCommand { agent: None, config: None, @@ -138,16 +142,54 @@ fn missing_configured_command_has_actionable_messages() { .to_string(); assert!(error.contains("missing command")); +} +#[test] +fn agent_without_configured_command_falls_back_to_default_binary() { + // `--agent cursor` with no `[agents.cursor] command = "..."` override resolves to the + // default executable name on $PATH (`cursor-agent` for the Cursor agent). 
let command = RunCommand { agent: Some(CodingAgent::Cursor), - ..command + config: None, + openai_base_url: None, + anthropic_base_url: None, + atif_dir: None, + openinference_endpoint: None, + session_metadata: None, + plugin_config: None, + dry_run: false, + print: false, + command: vec![], }; - let error = resolve_agent_and_argv(&command, &AgentConfigs::default()) - .unwrap_err() - .to_string(); - assert!(error.contains("no configured command for cursor")); + let (agent, argv) = resolve_agent_and_argv(&command, &AgentConfigs::default()).unwrap(); + assert_eq!(agent, CodingAgent::Cursor); + assert_eq!(argv, vec!["cursor-agent"]); +} + +#[test] +fn agent_with_passthrough_args_appends_to_configured_command() { + // The easy-path uses this code path: `nemo-flow codex -- --model X` resolves to the + // configured (or default) codex command with `--model X` appended. + let command = RunCommand { + agent: Some(CodingAgent::Codex), + config: None, + openai_base_url: None, + anthropic_base_url: None, + atif_dir: None, + openinference_endpoint: None, + session_metadata: None, + plugin_config: None, + dry_run: false, + print: false, + command: vec!["--model".into(), "openai/openai/gpt-5.1-codex".into()], + }; + + let (_, argv) = resolve_agent_and_argv(&command, &AgentConfigs::default()).unwrap(); + assert_eq!( + argv, + vec!["codex", "--model", "openai/openai/gpt-5.1-codex"] + ); } #[test] @@ -178,7 +220,10 @@ fn prepares_codex_config_overrides() { .iter() .any(|arg| arg.contains("model_providers.nemo-flow-openai") && arg.contains("base_url=\"http://127.0.0.1:1234\"") - && arg.contains("requires_openai_auth=true") + // NMF-86 mitigation: codex must NOT send credentials. The gateway injects + // OPENAI_API_KEY itself, so the JWT from ~/.codex/auth.json never reaches + // api.openai.com. 
+ && arg.contains("requires_openai_auth=false") + && arg.contains("supports_websockets=false")) ); assert!( @@ -502,7 +547,11 @@ async fn run_starts_gateway_injects_env_and_returns_agent_exit_code() { let output = temp.path().join("env.txt"); let command_argv = fake_agent_command(temp.path(), &output); let command = RunCommand { - agent: Some(CodingAgent::Codex), + // Leave `agent: None` so the launcher infers from argv[0] and uses `command_argv` + // (our fake-agent.sh) as the full argv. With --agent set, the resolver appends + // command as pass-through after the configured/default binary — not what this test + // wants, since it specifically asserts that argv[0] is the fake script. + agent: None, config: None, openai_base_url: None, anthropic_base_url: None, @@ -525,7 +574,11 @@ async fn run_starts_gateway_injects_env_and_returns_agent_exit_code() { #[cfg(unix)] fn fake_agent_command(temp: &Path, output: &Path) -> Vec<String> { - let script = temp.join("fake-agent.sh"); + // Name the script `codex` (not `fake-agent.sh`) so `CodingAgent::infer` recognizes the + // argv[0] basename without us needing to set `--agent` explicitly. With `--agent` set, + // the resolver appends `command.command` as pass-through args after the configured/default + // binary — wrong for this test, which wants the fake script itself to be argv[0]. + let script = temp.join("codex"); std::fs::write( &script, format!( diff --git a/crates/cli/tests/coverage/setup_tests.rs b/crates/cli/tests/coverage/setup_tests.rs new file mode 100644 index 00000000..6f364d6c --- /dev/null +++ b/crates/cli/tests/coverage/setup_tests.rs @@ -0,0 +1,247 @@ +// SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
+// SPDX-License-Identifier: Apache-2.0 + +use super::*; +use std::os::unix::fs::PermissionsExt; + +#[test] +fn detect_installed_agents_finds_binaries_on_path() { + let temp = tempfile::tempdir().unwrap(); + // Drop stub binaries for two of the four supported agents — confirming detection picks up + // only the ones present and ignores the others. + for exec in ["claude", "cursor-agent"] { + let path = temp.path().join(exec); + std::fs::write(&path, "#!/bin/sh\nexit 0\n").unwrap(); + std::fs::set_permissions(&path, std::fs::Permissions::from_mode(0o755)).unwrap(); + } + + // SAFETY: we save the original PATH and restore it explicitly before the test returns. + // Mutating PATH is process-global, so tests that touch it must run serially (use + // `cargo test -- --test-threads=1`); we do not assert on agent ordering or unrelated PATH entries. + let original_path = std::env::var_os("PATH"); + unsafe { + std::env::set_var("PATH", temp.path()); + } + + let detected = detect_installed_agents(); + assert!(detected.contains(&CodingAgent::ClaudeCode)); + assert!(detected.contains(&CodingAgent::Cursor)); + assert!(!detected.contains(&CodingAgent::Codex)); + assert!(!detected.contains(&CodingAgent::Hermes)); + + unsafe { + if let Some(value) = original_path { + std::env::set_var("PATH", value); + } else { + std::env::remove_var("PATH"); + } + } +} + +#[test] +fn build_config_emits_observability_section_when_atif_selected() { + let answers = SetupAnswers { + scope: ConfigScope::Project, + agents: vec![], + backends: vec![ObservabilityBackend::Atif], + openinference_endpoint: None, + openai_base_url: None, + }; + + let doc = build_config(&answers); + let rendered = doc.to_string(); + + assert!(rendered.contains("[observability]")); + assert!(rendered.contains(r#"atif_dir = "./atif""#)); + assert!(!rendered.contains("[export")); +} + +#[test] +fn build_config_emits_export_section_when_openinference_selected() { + let answers = SetupAnswers { + scope: ConfigScope::Project, + agents: vec![], + backends: 
vec![ObservabilityBackend::OpenInference], + openinference_endpoint: Some("http://localhost:6006/v1/traces".into()), + openai_base_url: None, + }; + + let doc = build_config(&answers); + let rendered = doc.to_string(); + + assert!(rendered.contains("[export.openinference]")); + assert!(rendered.contains(r#"endpoint = "http://localhost:6006/v1/traces""#)); +} + +#[test] +fn build_config_skips_empty_sections_when_no_backends_selected() { + let answers = SetupAnswers { + scope: ConfigScope::Project, + agents: vec![], + backends: vec![], + openinference_endpoint: None, + openai_base_url: None, + }; + + let doc = build_config(&answers); + let rendered = doc.to_string(); + + assert!(!rendered.contains("[observability]")); + assert!(!rendered.contains("[export")); + assert!(!rendered.contains("[agents]")); +} + +#[test] +fn build_config_emits_agents_block_with_user_facing_keys() { + let answers = SetupAnswers { + scope: ConfigScope::Project, + agents: vec![CodingAgent::ClaudeCode, CodingAgent::Codex], + backends: vec![], + openinference_endpoint: None, + openai_base_url: None, + }; + + let doc = build_config(&answers); + let rendered = doc.to_string(); + + // Agent keys match the user-facing CLI shortcut names (`claude`, not `claude-code`). 
+ assert!(rendered.contains("[agents.claude]")); + assert!(rendered.contains(r#"command = "claude""#)); + assert!(rendered.contains("[agents.codex]")); + assert!(rendered.contains(r#"command = "codex""#)); +} + +#[test] +fn build_config_writes_upstream_block_for_custom_openai_base_url() { + let answers = SetupAnswers { + scope: ConfigScope::Project, + agents: vec![CodingAgent::Codex], + backends: vec![ObservabilityBackend::Atif], + openinference_endpoint: None, + openai_base_url: Some("https://litellm.internal/v1".into()), + }; + let rendered = build_config(&answers).to_string(); + assert!(rendered.contains("[upstream]")); + assert!(rendered.contains(r#"openai_base_url = "https://litellm.internal/v1""#)); +} + +#[test] +fn build_config_omits_upstream_block_when_openai_base_url_is_none() { + let answers = SetupAnswers { + scope: ConfigScope::Project, + agents: vec![CodingAgent::Codex], + backends: vec![ObservabilityBackend::Atif], + openinference_endpoint: None, + openai_base_url: None, + }; + let rendered = build_config(&answers).to_string(); + assert!(!rendered.contains("[upstream]")); +} + +#[test] +fn save_config_writes_project_scope_to_workspace_dir() { + let answers = SetupAnswers { + scope: ConfigScope::Project, + agents: vec![CodingAgent::ClaudeCode], + backends: vec![ObservabilityBackend::Atif], + openinference_endpoint: None, + openai_base_url: None, + }; + let doc = build_config(&answers); + let temp = tempfile::tempdir().unwrap(); + let home = tempfile::tempdir().unwrap(); + + let written = save_config(&doc, ConfigScope::Project, temp.path(), home.path(), None).unwrap(); + + assert_eq!(written.len(), 1); + assert_eq!(written[0], temp.path().join(".nemo-flow/config.toml")); + let contents = std::fs::read_to_string(&written[0]).unwrap(); + assert!(contents.contains("[observability]")); + assert!(contents.contains("[agents.claude]")); +} + +#[test] +fn save_config_scoped_merge_preserves_other_agents() { + // Seed an existing config with claude AND codex 
blocks, plus a custom [upstream] that the + // wizard does not touch. Then "re-run" the wizard scoped to claude and assert codex + + // upstream survive while claude is updated and observability is written fresh. + let temp = tempfile::tempdir().unwrap(); + let home = tempfile::tempdir().unwrap(); + let project_dir = temp.path().join(".nemo-flow"); + std::fs::create_dir_all(&project_dir).unwrap(); + let existing_path = project_dir.join("config.toml"); + std::fs::write( + &existing_path, + r#"[upstream] +openai_base_url = "http://old-openai" + +[agents.claude] +command = "old-claude-binary" + +[agents.codex] +command = "codex --full-auto" +"#, + ) + .unwrap(); + + let answers = SetupAnswers { + scope: ConfigScope::Project, + agents: vec![CodingAgent::ClaudeCode], + backends: vec![ObservabilityBackend::Atif], + openinference_endpoint: None, + openai_base_url: None, + }; + let doc = build_config(&answers); + save_config( + &doc, + ConfigScope::Project, + temp.path(), + home.path(), + Some(CodingAgent::ClaudeCode), + ) + .unwrap(); + + let merged = std::fs::read_to_string(&existing_path).unwrap(); + // Wizard-owned sections are replaced with the new doc's content. + assert!(merged.contains("[observability]")); + assert!(merged.contains("[agents.claude]")); + assert!(merged.contains(r#"command = "claude""#)); + // Other agents and untouched sections survive. + assert!( + merged.contains("[agents.codex]"), + "expected scoped merge to preserve [agents.codex], got:\n{merged}" + ); + assert!( + merged.contains("codex --full-auto"), + "expected scoped merge to preserve codex command, got:\n{merged}" + ); + assert!( + merged.contains("http://old-openai"), + "expected scoped merge to preserve untouched [upstream], got:\n{merged}" + ); + // Old claude command should be gone. 
+ assert!( + !merged.contains("old-claude-binary"), + "expected scoped merge to overwrite [agents.claude].command, got:\n{merged}" + ); +} + +#[test] +fn save_config_writes_both_scopes_when_both_selected() { + let answers = SetupAnswers { + scope: ConfigScope::Both, + agents: vec![], + backends: vec![ObservabilityBackend::Atif], + openinference_endpoint: None, + openai_base_url: None, + }; + let doc = build_config(&answers); + let cwd = tempfile::tempdir().unwrap(); + let home = tempfile::tempdir().unwrap(); + + let written = save_config(&doc, ConfigScope::Both, cwd.path(), home.path(), None).unwrap(); + + assert_eq!(written.len(), 2); + assert!(written.iter().any(|p| p.starts_with(cwd.path()))); + assert!(written.iter().any(|p| p.starts_with(home.path()))); +} diff --git a/docs/integrate-frameworks/coding-agent-claude-code.md b/docs/integrate-frameworks/coding-agent-claude-code.md index 03e74fb0..aa155683 100644 --- a/docs/integrate-frameworks/coding-agent-claude-code.md +++ b/docs/integrate-frameworks/coding-agent-claude-code.md @@ -37,27 +37,27 @@ nemo-flow run \ If a launcher hides the command name, pass the agent explicitly: ```bash -nemo-flow run --agent claude-code -- my-claude-wrapper +nemo-flow run --agent claude -- my-claude-wrapper ``` ## Shared Config -Create `.nemo-flow/gateway.toml` for project defaults or -`~/.config/nemo-flow/gateway.toml` for user defaults: +Create `.nemo-flow/config.toml` for project defaults or +`~/.config/nemo-flow/config.toml` for user defaults: ```toml -[session] +[observability] atif_dir = ".nemo-flow/atif" metadata = { team = "agent-observability" } [export.openinference] endpoint = "http://127.0.0.1:4318/v1/traces" -[agents.claude-code] +[agents.claude] command = "claude" ``` -Then run `nemo-flow run --agent claude-code` to use the configured +Then run `nemo-flow run --agent claude` to use the configured command. User config takes priority over project and global config. 
## Persistent Install @@ -110,7 +110,7 @@ Then check that hook forwarding reaches the gateway: ```bash curl -f http://127.0.0.1:4040/healthz printf '{"session_id":"smoke-claude","hook_event_name":"SessionStart"}' \ - | NEMO_FLOW_GATEWAY_URL=http://127.0.0.1:4040 nemo-flow hook-forward claude-code --fail-closed + | NEMO_FLOW_GATEWAY_URL=http://127.0.0.1:4040 nemo-flow hook-forward claude --fail-closed ``` The response should be valid Claude Code hook JSON. For most lifecycle events it diff --git a/docs/integrate-frameworks/coding-agent-codex.md b/docs/integrate-frameworks/coding-agent-codex.md index 80b9a543..6b6a68e4 100644 --- a/docs/integrate-frameworks/coding-agent-codex.md +++ b/docs/integrate-frameworks/coding-agent-codex.md @@ -50,14 +50,14 @@ nemo-flow run --agent codex -- my-codex-wrapper ## Shared Config -Create `.nemo-flow/gateway.toml` for project defaults or -`~/.config/nemo-flow/gateway.toml` for user defaults: +Create `.nemo-flow/config.toml` for project defaults or +`~/.config/nemo-flow/config.toml` for user defaults: ```toml -[server] +[upstream] openai_base_url = "https://api.openai.com" -[session] +[observability] atif_dir = ".nemo-flow/atif" metadata = { team = "agent-observability" } diff --git a/docs/integrate-frameworks/coding-agent-cursor.md b/docs/integrate-frameworks/coding-agent-cursor.md index 3aa59b9f..4bfd03a8 100644 --- a/docs/integrate-frameworks/coding-agent-cursor.md +++ b/docs/integrate-frameworks/coding-agent-cursor.md @@ -50,11 +50,11 @@ nemo-flow run --agent cursor -- my-cursor-wrapper ## Shared Config -Create `.nemo-flow/gateway.toml` for project defaults or -`~/.config/nemo-flow/gateway.toml` for user defaults: +Create `.nemo-flow/config.toml` for project defaults or +`~/.config/nemo-flow/config.toml` for user defaults: ```toml -[session] +[observability] atif_dir = ".nemo-flow/atif" metadata = { team = "agent-observability" } diff --git a/docs/integrate-frameworks/coding-agent-gateway.md 
b/docs/integrate-frameworks/coding-agent-gateway.md index bd29ab5c..1c631ebf 100644 --- a/docs/integrate-frameworks/coding-agent-gateway.md +++ b/docs/integrate-frameworks/coding-agent-gateway.md @@ -86,27 +86,29 @@ and project config. CLI flags and environment variables override file config. Config file locations are: -- `/etc/nemo-flow/gateway.toml` -- `.nemo-flow/gateway.toml` -- `$XDG_CONFIG_HOME/nemo-flow/gateway.toml` -- `~/.config/nemo-flow/gateway.toml` +- `/etc/nemo-flow/config.toml` +- `.nemo-flow/config.toml` +- `$XDG_CONFIG_HOME/nemo-flow/config.toml` +- `~/.config/nemo-flow/config.toml` Example: ```toml -[server] +[upstream] openai_base_url = "https://api.openai.com" anthropic_base_url = "https://api.anthropic.com" -[session] +[observability] atif_dir = ".nemo-flow/atif" metadata = { team = "agent-observability" } -plugin_config = { components = [] } + +[plugins] +config = { components = [] } [export.openinference] endpoint = "http://127.0.0.1:4318/v1/traces" -[agents.claude-code] +[agents.claude] command = "claude" [agents.codex] diff --git a/docs/integrate-frameworks/coding-agent-hermes.md b/docs/integrate-frameworks/coding-agent-hermes.md index 53cd79db..342b91e2 100644 --- a/docs/integrate-frameworks/coding-agent-hermes.md +++ b/docs/integrate-frameworks/coding-agent-hermes.md @@ -49,11 +49,11 @@ nemo-flow run --agent hermes -- my-hermes-wrapper ## Shared Config -Create `.nemo-flow/gateway.toml` for project defaults or -`~/.config/nemo-flow/gateway.toml` for user defaults: +Create `.nemo-flow/config.toml` for project defaults or +`~/.config/nemo-flow/config.toml` for user defaults: ```toml -[session] +[observability] atif_dir = ".nemo-flow/atif" metadata = { team = "agent-observability" } diff --git a/integrations/coding-agents/README.md b/integrations/coding-agents/README.md index 9d245284..e2409082 100644 --- a/integrations/coding-agents/README.md +++ b/integrations/coding-agents/README.md @@ -50,7 +50,7 @@ nemo-flow run --atif-dir 
.nemo-flow/atif -- cursor-agent nemo-flow run --atif-dir .nemo-flow/atif -- hermes ``` -Use `--agent claude-code|codex|cursor|hermes` when a wrapper hides the agent +Use `--agent claude|codex|cursor|hermes` when a wrapper hides the agent command name. Use `--dry-run --print` to inspect generated config without launching. @@ -58,13 +58,13 @@ Hermes transparent runs export the dynamic `NEMO_FLOW_GATEWAY_URL`, but Hermes hooks still need to be installed or approved in Hermes configuration before they can call the gateway. -Shared TOML config is loaded from `/etc/nemo-flow/gateway.toml`, then nearest -project `.nemo-flow/gateway.toml`, then -`$XDG_CONFIG_HOME/nemo-flow/gateway.toml` or -`~/.config/nemo-flow/gateway.toml`. +Shared TOML config is loaded from `/etc/nemo-flow/config.toml`, then nearest +project `.nemo-flow/config.toml`, then +`$XDG_CONFIG_HOME/nemo-flow/config.toml` or +`~/.config/nemo-flow/config.toml`. ```toml -[session] +[observability] atif_dir = ".nemo-flow/atif" metadata = { team = "agent-observability" } diff --git a/integrations/coding-agents/claude-code/README.md b/integrations/coding-agents/claude-code/README.md index f7a65dc6..19fa7d52 100644 --- a/integrations/coding-agents/claude-code/README.md +++ b/integrations/coding-agents/claude-code/README.md @@ -16,7 +16,7 @@ same local hook and gateway controls as Claude Code. - `.claude-plugin/plugin.json` describes the Claude Code hook package. - `hooks/hooks.json` contains hook entries that run - `nemo-flow hook-forward claude-code`. + `nemo-flow hook-forward claude`. 
## Captured Events @@ -54,22 +54,22 @@ nemo-flow run \ ## Shared Config -Use `.nemo-flow/gateway.toml` for project defaults or -`~/.config/nemo-flow/gateway.toml` for user defaults: +Use `.nemo-flow/config.toml` for project defaults or +`~/.config/nemo-flow/config.toml` for user defaults: ```toml -[session] +[observability] atif_dir = ".nemo-flow/atif" metadata = { team = "agent-observability" } -[agents.claude-code] +[agents.claude] command = "claude" ``` Then run: ```bash -nemo-flow run --agent claude-code +nemo-flow run --agent claude ``` ## Persistent Setup @@ -112,7 +112,7 @@ For a direct endpoint smoke test against a manually started gateway: ```bash curl -f http://127.0.0.1:4040/healthz printf '{"session_id":"smoke-claude","hook_event_name":"SessionStart"}' \ - | NEMO_FLOW_GATEWAY_URL=http://127.0.0.1:4040 nemo-flow hook-forward claude-code --fail-closed + | NEMO_FLOW_GATEWAY_URL=http://127.0.0.1:4040 nemo-flow hook-forward claude --fail-closed ``` If hooks arrive but LLM spans are missing, confirm the Claude Code process was diff --git a/integrations/coding-agents/claude-code/hooks/hooks.json b/integrations/coding-agents/claude-code/hooks/hooks.json index 99e9e439..d8f0e825 100644 --- a/integrations/coding-agents/claude-code/hooks/hooks.json +++ b/integrations/coding-agents/claude-code/hooks/hooks.json @@ -5,7 +5,7 @@ "hooks": [ { "type": "command", - "command": "nemo-flow hook-forward claude-code", + "command": "nemo-flow hook-forward claude", "timeout": 30 } ] @@ -16,7 +16,7 @@ "hooks": [ { "type": "command", - "command": "nemo-flow hook-forward claude-code", + "command": "nemo-flow hook-forward claude", "timeout": 30 } ] @@ -28,7 +28,7 @@ "hooks": [ { "type": "command", - "command": "nemo-flow hook-forward claude-code", + "command": "nemo-flow hook-forward claude", "timeout": 30 } ] @@ -40,7 +40,7 @@ "hooks": [ { "type": "command", - "command": "nemo-flow hook-forward claude-code", + "command": "nemo-flow hook-forward claude", "timeout": 30 } ] @@ -52,7 
+52,7 @@ "hooks": [ { "type": "command", - "command": "nemo-flow hook-forward claude-code", + "command": "nemo-flow hook-forward claude", "timeout": 30 } ] @@ -64,7 +64,7 @@ "hooks": [ { "type": "command", - "command": "nemo-flow hook-forward claude-code", + "command": "nemo-flow hook-forward claude", "timeout": 30 } ] @@ -75,7 +75,7 @@ "hooks": [ { "type": "command", - "command": "nemo-flow hook-forward claude-code", + "command": "nemo-flow hook-forward claude", "timeout": 30 } ] @@ -86,7 +86,7 @@ "hooks": [ { "type": "command", - "command": "nemo-flow hook-forward claude-code", + "command": "nemo-flow hook-forward claude", "timeout": 30 } ] @@ -97,7 +97,7 @@ "hooks": [ { "type": "command", - "command": "nemo-flow hook-forward claude-code", + "command": "nemo-flow hook-forward claude", "timeout": 30 } ] @@ -108,7 +108,7 @@ "hooks": [ { "type": "command", - "command": "nemo-flow hook-forward claude-code", + "command": "nemo-flow hook-forward claude", "timeout": 30 } ] @@ -119,7 +119,7 @@ "hooks": [ { "type": "command", - "command": "nemo-flow hook-forward claude-code", + "command": "nemo-flow hook-forward claude", "timeout": 30 } ] @@ -130,7 +130,7 @@ "hooks": [ { "type": "command", - "command": "nemo-flow hook-forward claude-code", + "command": "nemo-flow hook-forward claude", "timeout": 30 } ] @@ -141,7 +141,7 @@ "hooks": [ { "type": "command", - "command": "nemo-flow hook-forward claude-code", + "command": "nemo-flow hook-forward claude", "timeout": 30 } ] diff --git a/integrations/coding-agents/codex/README.md b/integrations/coding-agents/codex/README.md index 65154b6b..a8f89364 100644 --- a/integrations/coding-agents/codex/README.md +++ b/integrations/coding-agents/codex/README.md @@ -63,11 +63,11 @@ nemo-flow run \ ## Shared Config -Use `.nemo-flow/gateway.toml` for project defaults or -`~/.config/nemo-flow/gateway.toml` for user defaults: +Use `.nemo-flow/config.toml` for project defaults or +`~/.config/nemo-flow/config.toml` for user defaults: ```toml 
-[session] +[observability] atif_dir = ".nemo-flow/atif" metadata = { team = "agent-observability" } diff --git a/integrations/coding-agents/cursor/README.md b/integrations/coding-agents/cursor/README.md index 89fba9d3..ddcd7c94 100644 --- a/integrations/coding-agents/cursor/README.md +++ b/integrations/coding-agents/cursor/README.md @@ -58,11 +58,11 @@ nemo-flow run \ ## Shared Config -Use `.nemo-flow/gateway.toml` for project defaults or -`~/.config/nemo-flow/gateway.toml` for user defaults: +Use `.nemo-flow/config.toml` for project defaults or +`~/.config/nemo-flow/config.toml` for user defaults: ```toml -[session] +[observability] atif_dir = ".nemo-flow/atif" metadata = { team = "agent-observability" } From 732f6507ec205af5be94c46a69bc4a3839abc245 Mon Sep 17 00:00:00 2001 From: Ajay Thorve Date: Tue, 12 May 2026 02:24:44 -0700 Subject: [PATCH 02/15] feat(cli): bordered animated banner with docked version tag Slanted ANSI-Shadow figlet inside a rounded NVIDIA-green border. Smooth tracer dot curves over NeMo, dips, glides under Flow, then settles as a dim-green vX.Y.Z (sourced from CARGO_PKG_VERSION) at bottom-right. Uses partial-redraw via DEC save/restore so the figlet never re-flickers. Signed-off-by: Ajay Thorve --- crates/cli/src/banner.rs | 362 ++++++++++++++++++++++ crates/cli/tests/coverage/banner_tests.rs | 110 +++++++ 2 files changed, 472 insertions(+) create mode 100644 crates/cli/src/banner.rs create mode 100644 crates/cli/tests/coverage/banner_tests.rs diff --git a/crates/cli/src/banner.rs b/crates/cli/src/banner.rs new file mode 100644 index 00000000..e59af4ff --- /dev/null +++ b/crates/cli/src/banner.rs @@ -0,0 +1,362 @@ +// SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! Slanted ANSI-Shadow "NeMo Flow" banner with a tracer dot that curves over the brand. +//! +//! 
Static art: filled block letters in NVIDIA green, each row shifted one column right of the +//! row above for an italic lean. Animation: a single bright dot enters from the top-left, +//! glides smoothly horizontally above "NeMo", dips through the gap between "NeMo" and "Flow", +//! glides horizontally below "Flow", and the banner then settles with a small "vX.Y.Z" tag in +//! green at the bottom-right. +//! +//! Three entry points: +//! - [`print_intro`] — wizard intro / bare `nemo-flow` (animated) +//! - [`print_doctor_header`] — settled static frame for `doctor` (no animation) +//! - [`render_frame`] — pure helper for tests + +use std::io::{IsTerminal, Write}; +use std::time::Duration; + +/// Filled-block NeMo Flow figlet with a per-row right shift so the letters lean italic. Six +/// content rows; the renderer prepends one blank row above and appends one below to host the +/// tracer dot's path. +const BANNER_LINES: &[&str] = &[ + " ███╗ ██╗███████╗███╗ ███╗ ██████╗ ███████╗██╗ ██████╗ ██╗ ██╗", + " ████╗ ██║██╔════╝████╗ ████║██╔═══██╗ ██╔════╝██║ ██╔═══██╗██║ ██║", + " ██╔██╗ ██║█████╗ ██╔████╔██║██║ ██║ █████╗ ██║ ██║ ██║██║ █╗██║", + " ██║╚██╗██║██╔══╝ ██║╚██╔╝██║██║ ██║ ██╔══╝ ██║ ██║ ██║██║██║██║", + " ██║ ╚████║███████╗██║ ╚═╝ ██║╚██████╔╝ ██║ ███████╗╚██████╔╝╚███╔███╔╝", + " ╚═╝ ╚═══╝╚══════╝╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚══════╝ ╚═════╝ ╚══╝╚══╝", +]; + +/// Banner geometry (visual rows including the dot's top and bottom rails). +const FIGLET_ROWS: usize = 6; +const TOP_RAIL: usize = 0; +const BOTTOM_RAIL: usize = FIGLET_ROWS + 1; // row index of the row below the figlet +const TOTAL_ROWS: usize = FIGLET_ROWS + 2; // top rail + 6 figlet rows + bottom rail + +/// Tracer-dot path waypoints — measured in columns. The dot moves linearly in col across +/// frames; its row follows an S-shape (top rail → smooth descent → bottom rail) based on +/// which segment the column falls into. 
+const COL_START: usize = 13; // above the "N" of NeMo +const COL_END: usize = 92; // right edge below "Flow" +const COL_DIP_START: usize = 44; // start descending after we clear "NeMo" +const COL_DIP_END: usize = 56; // finish descending before we hit "Flow" + +const MIN_WIDTH: usize = 105; + +// NVIDIA green on the figlet text and the surrounding border. The tracer head is a bright +// mint-green dot. The settled docked tag at bottom-right is dim green to read as a quiet +// version label without competing with the brand mark. +const NVIDIA_GREEN: &str = "\x1b[38;5;112m"; +const DOT_HEAD: &str = "\x1b[1;38;5;121m"; +const DOCK_TAG: &str = "\x1b[2;38;5;112m"; +const RESET: &str = "\x1b[0m"; + +// Rounded border glyphs. Drawn in NVIDIA green around the whole banner. +const BORDER_TL: char = '╭'; +const BORDER_TR: char = '╮'; +const BORDER_BL: char = '╰'; +const BORDER_BR: char = '╯'; +const BORDER_H: char = '─'; +const BORDER_V: char = '│'; + +fn supports_banner() -> bool { + if !std::io::stdout().is_terminal() { + return false; + } + if std::env::var_os("NO_COLOR").is_some() { + return false; + } + if std::env::var("CI").is_ok_and(|v| v == "true" || v == "1") { + return false; + } + if std::env::var("TERM").as_deref() == Ok("dumb") { + return false; + } + terminal_width().is_some_and(|w| w >= MIN_WIDTH) +} + +fn terminal_width() -> Option<usize> { + if !std::io::stdout().is_terminal() { + return None; + } + std::env::var("COLUMNS") + .ok() + .and_then(|v| v.parse::<usize>().ok()) + .or(Some(120)) +} + +/// Total animation frames for the tracer dot's traversal. Drives both the timing in +/// `animate_reveal` and the path-step helper used by tests. Higher count = smoother glide. +pub(crate) const TRACER_FRAMES: usize = 160; + +/// Returns the tracer dot's `(row, col)` at the given frame. 
The dot moves linearly in `col` +/// from `COL_START` to `COL_END` and follows an S-shape in `row`: stays on the top rail until +/// it has cleared "NeMo", smoothly descends through the gap, then stays on the bottom rail +/// until it exits below "Flow". `None` when the animation has finished. +pub(crate) fn tracer_position(frame: usize) -> Option<(usize, usize)> { + if frame >= TRACER_FRAMES { + return None; + } + let t = frame as f32 / (TRACER_FRAMES - 1).max(1) as f32; + let col = COL_START as f32 + (COL_END - COL_START) as f32 * t; + let col_usize = col as usize; + let row = if col_usize <= COL_DIP_START { + TOP_RAIL as f32 + } else if col_usize >= COL_DIP_END { + BOTTOM_RAIL as f32 + } else { + // Smooth ease (smoothstep) between top rail and bottom rail across the dip range. + let local = (col_usize - COL_DIP_START) as f32 / (COL_DIP_END - COL_DIP_START) as f32; + let eased = local * local * (3.0 - 2.0 * local); + TOP_RAIL as f32 + (BOTTOM_RAIL - TOP_RAIL) as f32 * eased + }; + Some((row.round() as usize, col_usize)) +} + +/// Pure renderer. `tracer` carries the dot's (row, col) for this frame, or `None` to render +/// the settled static banner. `color=false` strips all ANSI escapes. +pub(crate) fn render_frame(tracer: Option<(usize, usize)>, color: bool) -> String { + render_frame_inner(tracer, color, false) +} + +/// Settled frame with a glowing "● vX.Y.Z" tag docked at the bottom-right under "Flow". Used +/// after the animation finishes and as the static frame for the doctor header. +pub(crate) fn render_docked_frame(color: bool) -> String { + render_frame_inner(None, color, true) +} + +fn render_frame_inner(tracer: Option<(usize, usize)>, color: bool, docked: bool) -> String { + let mut out = String::with_capacity(BANNER_LINES.iter().map(|l| l.len() + 64).sum()); + out.push('\n'); + + // Build a 2D grid: empty top rail, the 6 figlet rows, empty bottom rail. 
Each cell is a
+    // single char (we treat Unicode block chars as 1 display column wide, which is true for the
+    // glyphs the figlet uses).
+    let mut grid: Vec<Vec<char>> = Vec::with_capacity(TOTAL_ROWS);
+    let dock_tag = format!(" v{}", env!("CARGO_PKG_VERSION"));
+    let dock_width_needed = COL_END + dock_tag.chars().count() + 2;
+    let max_width = BANNER_LINES
+        .iter()
+        .map(|l| l.chars().count())
+        .max()
+        .unwrap_or(0)
+        .max(dock_width_needed);
+
+    // Top rail (empty).
+    grid.push(vec![' '; max_width]);
+    // 6 figlet rows, padded to max_width.
+    for line in BANNER_LINES {
+        let mut row: Vec<char> = line.chars().collect();
+        while row.len() < max_width {
+            row.push(' ');
+        }
+        grid.push(row);
+    }
+    // Bottom rail (empty).
+    grid.push(vec![' '; max_width]);
+
+    // Overlay the docked version tag at bottom-right: just "vX.Y.Z" in dim green. No dot — the
+    // version reads as a quiet label below "Flow", letting the brand mark stand on its own.
+    let dock_col_start = COL_END;
+    let dock_col_end = dock_col_start + dock_tag.chars().count();
+    if docked {
+        let dock_row = BOTTOM_RAIL;
+        for (i, ch) in dock_tag.chars().enumerate() {
+            let c = dock_col_start + i;
+            if dock_row < grid.len() && c < grid[dock_row].len() {
+                grid[dock_row][c] = ch;
+            }
+        }
+    }
+
+    // Overlay the tracer head only — no trail. Smooth motion comes from the higher frame count.
+    if let Some((row, col)) = tracer
+        && row < grid.len()
+        && col < grid[row].len()
+    {
+        grid[row][col] = '●';
+    }
+
+    // Top border row.
+    push_border_line(&mut out, BORDER_TL, BORDER_TR, max_width, color);
+
+    // Emit the grid with appropriate coloring per cell. Each grid row is wrapped with a
+    // vertical border on the left and right, painted in NVIDIA green.
+ for (row_idx, row) in grid.iter().enumerate() { + if color { + out.push_str(NVIDIA_GREEN); + out.push(BORDER_V); + out.push_str(RESET); + } else { + out.push(BORDER_V); + } + for (col_idx, ch) in row.iter().enumerate() { + let in_dock_tag = docked + && row_idx == BOTTOM_RAIL + && col_idx >= dock_col_start + && col_idx < dock_col_end; + if in_dock_tag && *ch != ' ' { + if color { + out.push_str(DOCK_TAG); + out.push(*ch); + out.push_str(RESET); + } else { + out.push(*ch); + } + } else if Some((row_idx, col_idx)) == tracer && *ch == '●' { + if color { + out.push_str(DOT_HEAD); + out.push(*ch); + out.push_str(RESET); + } else { + out.push('*'); + } + } else if is_figlet_glyph(*ch) { + if color { + out.push_str(NVIDIA_GREEN); + out.push(*ch); + out.push_str(RESET); + } else { + out.push(*ch); + } + } else { + out.push(*ch); + } + } + if color { + out.push_str(NVIDIA_GREEN); + out.push(BORDER_V); + out.push_str(RESET); + } else { + out.push(BORDER_V); + } + out.push('\n'); + } + + // Bottom border row. + push_border_line(&mut out, BORDER_BL, BORDER_BR, max_width, color); + + out +} + +fn push_border_line(out: &mut String, left: char, right: char, inner_width: usize, color: bool) { + if color { + out.push_str(NVIDIA_GREEN); + out.push(left); + for _ in 0..inner_width { + out.push(BORDER_H); + } + out.push(right); + out.push_str(RESET); + } else { + out.push(left); + for _ in 0..inner_width { + out.push(BORDER_H); + } + out.push(right); + } + out.push('\n'); +} + +fn is_figlet_glyph(ch: char) -> bool { + matches!(ch, '█' | '╗' | '╔' | '╝' | '╚' | '═' | '║') +} + +pub(crate) fn print_intro() { + if !supports_banner() { + print_plain_header(); + return; + } + animate_reveal(); +} + +pub(crate) fn print_doctor_header() { + if !supports_banner() { + print_plain_header(); + return; + } + print!("{}", render_docked_frame(true)); +} + +fn animate_reveal() { + // Smoothness strategy: + // 1. Print the static banner ONCE so the figlet never flickers. + // 2. 
Save cursor (DEC ESC 7), then per-frame restore + move-up + move-to-col to repaint + // just the dot cell. Erasing + repainting one cell is far cheaper than redrawing the + // full banner each frame and reads as continuous motion. + // 3. Skip frames where the integer column hasn't advanced — we'd just sleep and redraw + // the same cell, wasting time and breaking the perceived pace. + let frame_ms = 8u64; + let mut stdout = std::io::stdout(); + let _ = write!(stdout, "\x1b[?25l"); + // Paint the static banner. Cursor lands on the line just below the bottom rail. + let _ = write!(stdout, "{}", render_frame(None, true)); + // Save cursor position so each frame can restore back to this anchor before navigating. + let _ = write!(stdout, "\x1b7"); + let _ = stdout.flush(); + + let mut last_pos: Option<(usize, usize)> = None; + for f in 0..TRACER_FRAMES { + let Some((row, col)) = tracer_position(f) else { + break; + }; + // Skip duplicate-column frames — keeps motion paced even though we still sleep. + if last_pos == Some((row, col)) { + std::thread::sleep(Duration::from_millis(frame_ms)); + continue; + } + // Erase the previous dot (write a space at the old position). + if let Some((pr, pc)) = last_pos { + paint_cell(&mut stdout, pr, pc, ' ', None); + } + // Draw the current dot. + paint_cell(&mut stdout, row, col, '●', Some(DOT_HEAD)); + let _ = stdout.flush(); + last_pos = Some((row, col)); + std::thread::sleep(Duration::from_millis(frame_ms)); + } + + // Settle: erase the last dot and stamp the version tag at the dock spot. + if let Some((pr, pc)) = last_pos { + paint_cell(&mut stdout, pr, pc, ' ', None); + } + let dock_tag = format!(" v{}", env!("CARGO_PKG_VERSION")); + // Move to (BOTTOM_RAIL, COL_END) inside the border and write the dim-green tag. Anchor sits + // below the bottom border line; +1 vertical for the border, +1 horizontal for the left + // border. 
+ let _ = write!(stdout, "\x1b8"); // restore to anchor below banner + let _ = write!(stdout, "\x1b[{}A", TOTAL_ROWS - BOTTOM_RAIL + 1); + let _ = write!(stdout, "\x1b[{}G", COL_END + 2); + let _ = write!(stdout, "{DOCK_TAG}{dock_tag}{RESET}"); + let _ = write!(stdout, "\x1b8"); + let _ = write!(stdout, "\x1b[?25h"); + let _ = stdout.flush(); +} + +/// Paint a single character at grid (row, col) relative to the anchor saved by `\x1b7` after +/// the static banner was printed. Accounts for the surrounding border: +1 row offset for the +/// bottom border line and +1 column for the left border. `color` is an optional SGR prefix +/// (RESET is always emitted after the char). Cursor is left at the anchor. +fn paint_cell(out: &mut std::io::Stdout, row: usize, col: usize, ch: char, color: Option<&str>) { + let _ = write!(out, "\x1b8"); + let _ = write!(out, "\x1b[{}A", TOTAL_ROWS - row + 1); + let _ = write!(out, "\x1b[{}G", col + 2); + if let Some(c) = color { + let _ = write!(out, "{c}{ch}{RESET}"); + } else { + let _ = write!(out, "{ch}"); + } +} + +fn print_plain_header() { + let version = env!("CARGO_PKG_VERSION"); + println!(); + println!(" NeMo Flow v{version}"); + println!(); +} + +#[cfg(test)] +#[path = "../tests/coverage/banner_tests.rs"] +mod tests; diff --git a/crates/cli/tests/coverage/banner_tests.rs b/crates/cli/tests/coverage/banner_tests.rs new file mode 100644 index 00000000..eaf78633 --- /dev/null +++ b/crates/cli/tests/coverage/banner_tests.rs @@ -0,0 +1,110 @@ +// SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +use super::*; + +#[test] +fn render_frame_settled_contains_figlet_glyphs() { + let frame = render_frame(None, false); + // ANSI Shadow figlet uses filled blocks and box-drawing corners. 
+ assert!(frame.contains('█'), "frame missing figlet block glyph"); + assert!( + frame.contains('╗') || frame.contains('╔'), + "frame missing figlet corners" + ); +} + +#[test] +fn render_frame_plain_mode_has_no_ansi_escapes() { + let frame = render_frame(None, false); + assert!( + !frame.contains('\x1b'), + "plain mode should emit no ANSI escapes" + ); +} + +#[test] +fn render_frame_color_mode_emits_nvidia_green() { + let frame = render_frame(None, true); + assert!(frame.contains("\x1b[38;5;112m")); + assert!(frame.contains("\x1b[0m")); +} + +#[test] +fn render_frame_tracer_overlay_inserts_dot_at_position() { + // Pick a position on the top rail (row 0) that's empty in the static art. + let frame_with = render_frame(Some((0, 14)), true); + let frame_without = render_frame(None, true); + assert!( + frame_with.contains('●'), + "tracer should render a `●` head when overlay is active" + ); + assert!( + !frame_without.contains('●'), + "settled frame (no tracer) should not include the dot glyph" + ); +} + +#[test] +fn render_frame_tracer_plain_mode_uses_ascii_star() { + let frame = render_frame(Some((0, 14)), false); + assert!( + frame.contains('*'), + "plain mode tracer head should render as `*` (ASCII star)" + ); + assert!( + !frame.contains('●'), + "plain mode should not emit Unicode dot" + ); +} + +#[test] +fn tracer_position_starts_on_top_rail_and_ends_on_bottom_rail() { + let (r0, _c0) = tracer_position(0).expect("frame 0 should have a position"); + assert_eq!(r0, 0, "tracer starts on the top rail"); + + let (r_last, c_last) = + tracer_position(TRACER_FRAMES - 1).expect("last animated frame should have a position"); + assert!( + r_last >= 6, + "tracer should descend to the bottom rail by the last frame" + ); + assert!( + c_last >= 80, + "tracer should travel close to the right edge by the last frame" + ); +} + +#[test] +fn tracer_position_is_none_after_animation_ends() { + assert!(tracer_position(TRACER_FRAMES).is_none()); + assert!(tracer_position(TRACER_FRAMES + 
100).is_none()); +} + +#[test] +fn frame_is_wrapped_with_rounded_border() { + let frame = render_frame(None, false); + // Four corner glyphs and the side bars must appear. + assert!(frame.contains('╭'), "missing top-left corner"); + assert!(frame.contains('╮'), "missing top-right corner"); + assert!(frame.contains('╰'), "missing bottom-left corner"); + assert!(frame.contains('╯'), "missing bottom-right corner"); + assert!(frame.contains('│'), "missing vertical border"); + assert!(frame.contains('─'), "missing horizontal border"); +} + +#[test] +fn docked_frame_includes_version_tag() { + let frame = render_docked_frame(false); + let version = env!("CARGO_PKG_VERSION"); + let expected = format!("v{version}"); + assert!( + frame.contains(&expected), + "docked frame should include the version tag `{expected}`" + ); + // No bullet dot before the version — settled state is just the green text label. + assert!( + !frame.contains('●'), + "docked frame should not include a bullet dot before the version" + ); +} From 8902b8f5e9f3b06fe4fa43511718362efe6fddfb Mon Sep 17 00:00:00 2001 From: Ajay Thorve Date: Tue, 12 May 2026 02:51:23 -0700 Subject: [PATCH 03/15] feat(cli): drop install subcommand, fold hermes prep into setup Hermes hooks now land via `nemo-flow config` (the only agent that needs persistent files); claude/codex/cursor stay wrapper-ephemeral. Docs updated to drop `nemo-flow install` references. 
Signed-off-by: Ajay Thorve --- crates/cli/src/config.rs | 63 +-- crates/cli/src/installer.rs | 361 +----------------- crates/cli/src/launcher.rs | 24 +- crates/cli/src/main.rs | 4 - crates/cli/src/setup.rs | 87 ++++- crates/cli/tests/cli_tests.rs | 31 -- crates/cli/tests/coverage/installer_tests.rs | 246 +----------- crates/cli/tests/coverage/launcher_tests.rs | 4 +- crates/cli/tests/coverage/setup_tests.rs | 57 ++- .../coding-agent-claude-code.md | 21 +- .../coding-agent-codex.md | 15 +- .../coding-agent-cursor.md | 19 +- .../coding-agent-gateway.md | 51 +-- .../coding-agent-hermes.md | 39 +- integrations/coding-agents/README.md | 65 +--- .../coding-agents/claude-code/README.md | 19 +- integrations/coding-agents/codex/README.md | 16 +- integrations/coding-agents/cursor/README.md | 17 +- 18 files changed, 268 insertions(+), 871 deletions(-) diff --git a/crates/cli/src/config.rs b/crates/cli/src/config.rs index 6a9bf7ec..082fdfd7 100644 --- a/crates/cli/src/config.rs +++ b/crates/cli/src/config.rs @@ -64,11 +64,10 @@ pub(crate) enum Command { Cursor(EasyPathCommand), /// Run Hermes with observability (setup on first use) #[command( - long_about = "Run NVIDIA's Hermes agent under a NeMo Flow gateway. Unlike the other \ - agents, Hermes is typically run with persistent shell hooks (install via \ - `nemo-flow install hermes`) and a long-running gateway daemon on a fixed \ - port. The Hermes config (`~/.hermes/config.yaml`) must point its \ - `model.base_url` at that daemon.", + long_about = "Run NVIDIA's Hermes agent under a NeMo Flow gateway. Hermes reads hooks \ + from `.hermes/config.yaml`; first-run setup writes that file alongside \ + `.nemo-flow/config.toml` so every subsequent invocation traces \ + automatically. 
Re-run `nemo-flow config hermes` to refresh the hooks.",
     after_help = "Examples:\n  \
         nemo-flow hermes\n  \
         nemo-flow hermes -- chat --provider custom"
@@ -84,8 +83,6 @@ pub(crate) enum Command {
     Completions(CompletionsCommand),
     /// Run an agent deterministically (no wizard; errors if config is missing)
     Run(RunCommand),
-    /// Install persistent hooks into an agent's own config directory (advanced)
-    Install(InstallCommand),
     /// Internal: subprocess used by installed hooks to forward events. Not typed by humans.
     #[command(hide = true)]
     HookForward(HookForwardCommand),
@@ -191,38 +188,6 @@ pub(crate) struct GatewayConfig {
     pub(crate) plugin_config: Option<String>,
 }
 
-#[derive(Debug, Clone, Args)]
-pub(crate) struct InstallCommand {
-    #[arg(value_enum)]
-    pub(crate) agent: CodingAgent,
-    #[arg(long, value_enum, default_value = "user")]
-    pub(crate) scope: InstallScope,
-    #[arg(long, value_enum, default_value = "both")]
-    pub(crate) target: InstallTarget,
-    #[arg(long, default_value = "http://127.0.0.1:4040")]
-    pub(crate) gateway_url: String,
-    #[arg(long)]
-    pub(crate) atif_dir: Option<PathBuf>,
-    #[arg(long)]
-    pub(crate) openinference_endpoint: Option<String>,
-    #[arg(long)]
-    pub(crate) profile: Option<String>,
-    #[arg(long)]
-    pub(crate) session_metadata: Option<String>,
-    #[arg(long)]
-    pub(crate) plugin_config: Option<String>,
-    #[arg(long, value_enum)]
-    pub(crate) gateway_mode: Option<GatewayMode>,
-    #[arg(long)]
-    pub(crate) dry_run: bool,
-    #[arg(long)]
-    pub(crate) print: bool,
-    #[arg(long, hide = true)]
-    pub(crate) home_dir: Option<PathBuf>,
-    #[arg(long, hide = true)]
-    pub(crate) project_dir: Option<PathBuf>,
-}
-
 #[derive(Debug, Clone, Args)]
 pub(crate) struct HookForwardCommand {
     #[arg(value_enum)]
@@ -297,21 +262,6 @@ pub(crate) enum CodingAgent {
     Hermes,
 }
 
-#[derive(Debug, Clone, Copy, PartialEq, Eq, ValueEnum)]
-#[value(rename_all = "kebab-case")]
-pub(crate) enum InstallScope {
-    User,
-    Project,
-}
-
-#[derive(Debug, Clone, Copy, PartialEq, Eq, ValueEnum)]
-#[value(rename_all = "kebab-case")]
-pub(crate) enum InstallTarget {
-    Cli,
-    Gui,
-    Both,
-}
-
 #[derive(Debug, Clone, Copy, PartialEq, Eq, ValueEnum)]
 #[value(rename_all = "kebab-case")]
 pub(crate) enum GatewayMode {
@@ -374,6 +324,9 @@ pub(crate) struct AgentConfigs {
 #[derive(Debug, Clone, Default)]
 pub(crate) struct AgentCommandConfig {
     pub(crate) command: Option<String>,
+    /// Recorded by `nemo-flow config` when it installs hermes shell hooks. Other agents leave
+    /// this empty; the launcher reads it only to print a "hooks live here" pointer for hermes.
+    pub(crate) hooks_path: Option<PathBuf>,
 }
 
 #[derive(Debug, Clone)]
@@ -453,6 +406,7 @@ struct FileAgentsConfig {
 #[derive(Debug, Clone, Default, Deserialize)]
 struct FileAgentCommandConfig {
     command: Option<String>,
+    hooks_path: Option<PathBuf>,
 }
 
 #[derive(Debug, Clone, Default, Deserialize)]
@@ -740,6 +694,7 @@ fn apply_file_agents_config(agents: &mut AgentConfigs, file_agents: Option
-pub(crate) fn install(command: InstallCommand) -> Result<(), CliError> {
-    validate_optional_json("session metadata", command.session_metadata.as_deref())?;
-    validate_optional_json("plugin config", command.plugin_config.as_deref())?;
-    let files = planned_files(&command)?;
-    if command.print {
-        print_planned_files(&files);
-    }
-    if command.dry_run {
-        print_dry_run_summary(&command);
-        return Ok(());
-    }
-    write_planned_files(&files)?;
-    print_target_note(command.agent, command.target);
-    Ok(())
-}
-
-// Prints planned file contents in the same format used by installer dry-run tests. The trailing
-// newline fix keeps concatenated file previews readable even when serialized contents lack one.
-fn print_planned_files(files: &[PlannedFile]) {
-    for file in files {
-        println!("--- {}", file.path.display());
-        print!("{}", file.contents);
-        if !file.contents.ends_with('\n') {
-            println!();
-        }
-    }
-}
-
-// Prints the install summary without touching the filesystem. Keeping this separate from the write
-// path makes the `install` control flow read as validate, plan, preview, then mutate-or-return.
-fn print_dry_run_summary(command: &InstallCommand) {
-    println!(
-        "Dry run: would install {} integration for {:?} {:?}.",
-        command.agent.as_arg(),
-        command.scope,
-        command.target
-    );
-}
-
-// Writes every planned file with backup behavior handled by `write_planned_file`. This helper
-// centralizes the success output so per-file write semantics stay consistent across agents.
-fn write_planned_files(files: &[PlannedFile]) -> Result<(), CliError> {
-    for file in files {
-        write_planned_file(file)?;
-        println!("Installed {}", file.path.display());
-    }
-    Ok(())
-}
-
 /// Forwards a hook payload from an installed shell command to a running gateway.
 ///
 /// Empty stdin is normalized to `{}` so hooks that provide no payload still generate observable
@@ -244,7 +181,7 @@ async fn handle_hook_forward_response(
 }
 
 // Chooses the gateway URL for hook-forward. Hermes prefers the runtime environment URL because
-// its hooks are commonly installed persistently but reused by `run --agent hermes` with an
+// its hooks are installed persistently by setup but reused under `nemo-flow hermes` with an
 // ephemeral gateway; other agents prefer the installed command URL for stable configuration.
 fn resolve_hook_gateway_url(
     agent: CodingAgent,
@@ -252,200 +189,11 @@
     env_url: Option<String>,
 ) -> Option<String> {
     match agent {
-        // Hermes shell hooks are installed persistently, but `run --agent hermes`
-        // starts an ephemeral gateway and passes the live URL through env.
         CodingAgent::Hermes => env_url.or(command_url),
         _ => command_url.or(env_url),
     }
 }
 
-// Builds the exact files that would be written for an install command. Each agent keeps its native
-// config format: Claude/Cursor/Codex hook JSON, Codex feature TOML, and Hermes YAML translated
-// through the shared JSON hook merge logic.
-fn planned_files(command: &InstallCommand) -> Result<Vec<PlannedFile>, CliError> {
-    let base = install_base(command)?;
-    match command.agent {
-        CodingAgent::ClaudeCode => planned_claude_file(command, &base),
-        CodingAgent::Codex => planned_codex_files(command, &base),
-        CodingAgent::Cursor => planned_cursor_file(command, &base),
-        CodingAgent::Hermes => planned_hermes_file(command, &base),
-    }
-}
-
-// Plans the Claude settings file by merging generated hook groups into existing JSON settings.
-// Claude's plugin-dir transparent mode uses a separate temporary file path handled by launcher.
-fn planned_claude_file(
-    command: &InstallCommand,
-    base: &Path,
-) -> Result<Vec<PlannedFile>, CliError> {
-    let path = base.join(".claude/settings.json");
-    Ok(vec![planned_json_hooks_file(
-        path,
-        claude_hooks(&hook_command(command, CodingAgent::ClaudeCode)),
-    )?])
-}
-
-// Plans both Codex files: feature enablement in TOML and generated hook groups in JSON. The TOML
-// merge intentionally leaves unrelated provider configuration untouched.
-fn planned_codex_files(
-    command: &InstallCommand,
-    base: &Path,
-) -> Result<Vec<PlannedFile>, CliError> {
-    let config_path = base.join(".codex/config.toml");
-    let hooks_path = base.join(".codex/hooks.json");
-    let existing_config = read_optional_text_file(&config_path)?;
-    Ok(vec![
-        PlannedFile {
-            path: config_path.clone(),
-            contents: merge_codex_config(&existing_config)?,
-        },
-        planned_json_hooks_file(
-            hooks_path,
-            codex_hooks(&hook_command(command, CodingAgent::Codex)),
-        )?,
-    ])
-}
-
-// Plans Cursor's project hook file using the shared JSON hook merge behavior. Cursor transparent
-// runs patch and restore this same path dynamically instead of writing persistent config.
-fn planned_cursor_file(
-    command: &InstallCommand,
-    base: &Path,
-) -> Result<Vec<PlannedFile>, CliError> {
-    let path = base.join(".cursor/hooks.json");
-    Ok(vec![planned_json_hooks_file(
-        path,
-        cursor_hooks(&hook_command(command, CodingAgent::Cursor)),
-    )?])
-}
-
-// Plans Hermes YAML config by translating through the shared hook map format. Missing files are
-// treated as empty config, while unreadable files fail rather than overwriting user state.
-fn planned_hermes_file(
-    command: &InstallCommand,
-    base: &Path,
-) -> Result<Vec<PlannedFile>, CliError> {
-    let path = base.join(".hermes/config.yaml");
-    let existing = read_optional_text_file(&path)?;
-    let contents = merge_hermes_config(
-        &existing,
-        hermes_hooks(&hook_command(command, CodingAgent::Hermes)),
-    )?;
-    Ok(vec![PlannedFile { path, contents }])
-}
-
-// Reads an optional text file for config formats where missing files are valid install targets.
-// Non-not-found I/O errors still propagate to avoid losing existing user configuration.
-fn read_optional_text_file(path: &Path) -> Result<String, CliError> {
-    match std::fs::read_to_string(path) {
-        Ok(raw) => Ok(raw),
-        Err(error) if error.kind() == std::io::ErrorKind::NotFound => Ok(String::new()),
-        Err(error) => Err(CliError::Io(error)),
-    }
-}
-
-// Produces a planned JSON hook file by reading existing JSON, merging generated hooks, and
-// formatting the result consistently with the package hook bundles.
-fn planned_json_hooks_file(path: PathBuf, generated: Value) -> Result<PlannedFile, CliError> {
-    let existing = read_json_file(&path)?;
-    let contents = serde_json::to_string_pretty(&merge_hooks(existing, generated)?)
-        .map_err(|error| CliError::Install(error.to_string()))?;
-    Ok(PlannedFile { path, contents })
-}
-
-// Resolves the installation root according to user or project scope. Hidden test-only overrides
-// take precedence so coverage can avoid touching real home/project directories.
-fn install_base(command: &InstallCommand) -> Result<PathBuf, CliError> {
-    match command.scope {
-        InstallScope::User => command
-            .home_dir
-            .clone()
-            .or_else(home_dir)
-            .ok_or_else(|| CliError::Install("could not resolve home directory".into())),
-        InstallScope::Project => command
-            .project_dir
-            .clone()
-            .map(Ok)
-            .unwrap_or_else(std::env::current_dir)
-            .map_err(CliError::from),
-    }
-}
-
-// Builds the shell command persisted into hook configuration. Optional gateway settings are turned
-// into hook-forward flags and every argument is shell-quoted because most target hook systems store
-// the command as a single shell string.
-fn hook_command(command: &InstallCommand, agent: CodingAgent) -> String {
-    let mut args = vec![
-        "nemo-flow".to_string(),
-        "hook-forward".to_string(),
-        agent.as_arg().to_string(),
-        "--gateway-url".to_string(),
-        command.gateway_url.clone(),
-    ];
-    push_optional_path(&mut args, "--atif-dir", command.atif_dir.as_deref());
-    push_optional(
-        &mut args,
-        "--openinference-endpoint",
-        command.openinference_endpoint.as_deref(),
-    );
-    push_optional(&mut args, "--profile", command.profile.as_deref());
-    push_optional(
-        &mut args,
-        "--session-metadata",
-        command.session_metadata.as_deref(),
-    );
-    push_optional(
-        &mut args,
-        "--plugin-config",
-        command.plugin_config.as_deref(),
-    );
-    push_optional_gateway_mode(&mut args, command.gateway_mode);
-    args.into_iter()
-        .map(|arg| shell_quote(&arg))
-        .collect::<Vec<_>>()
-        .join(" ")
-}
-
-// Appends a flag/value pair only when a string option is present, preserving omission semantics in
-// generated hook commands instead of serializing empty values.
-fn push_optional(args: &mut Vec<String>, flag: &str, value: Option<&str>) {
-    if let Some(value) = value {
-        args.push(flag.to_string());
-        args.push(value.to_string());
-    }
-}
-
-// Appends optional path flags using display formatting because installed commands are read by a
-// shell, not by Rust path parsers.
-fn push_optional_path(args: &mut Vec<String>, flag: &str, value: Option<&Path>) {
-    if let Some(value) = value {
-        args.push(flag.to_string());
-        args.push(value.display().to_string());
-    }
-}
-
-// Serializes the gateway-mode enum into the generated hook-forward command only when explicitly
-// configured, leaving default runtime behavior under the gateway's normal config resolution.
-fn push_optional_gateway_mode(args: &mut Vec<String>, gateway_mode: Option<GatewayMode>) {
-    if let Some(gateway_mode) = gateway_mode {
-        args.push("--gateway-mode".to_string());
-        args.push(gateway_mode.as_arg().to_string());
-    }
-}
-
-// Quotes a shell argument only when necessary. The safe character set is intentionally small so
-// paths and URLs remain readable while whitespace, quotes, and shell metacharacters are protected.
-fn shell_quote(value: &str) -> String {
-    if value
-        .chars()
-        .all(|character| character.is_ascii_alphanumeric() || "-_./:=,".contains(character))
-    {
-        value.to_string()
-    } else {
-        format!("'{}'", value.replace('\'', "'\\''"))
-    }
-}
-
 /// Generates native hook configuration for the selected agent.
 ///
 /// The returned value always has a top-level `hooks` object, but Hermes uses its simpler command
@@ -483,7 +231,7 @@ fn cursor_hooks(command: &str) -> Value {
 
 // Generates Hermes YAML-compatible hook groups. Hermes expects direct command entries rather than
 // the nested `type = command` group format used by Claude, Codex, and Cursor.
-fn hermes_hooks(command: &str) -> Value {
+pub(crate) fn hermes_hooks(command: &str) -> Value {
     let hooks: serde_json::Map<String, Value> = HERMES_HOOK_EVENTS
         .iter()
         .map(|event| {
@@ -561,7 +309,7 @@ pub(crate) fn merge_hooks(existing: Value, generated: Value) -> Result Result {
     match existing {
         Value::Null => Ok(json!({})),
@@ -593,7 +341,7 @@ fn generated_hooks_object(generated: &Value) -> Result<&serde_json::Map, event: &str,
@@ -614,26 +362,9 @@
     Ok(())
 }
 
-// Enables Codex hook support in TOML without rewriting unrelated config.
Empty config creates a
-// new document; malformed TOML fails before any install writes occur.
-fn merge_codex_config(existing: &str) -> Result<String, CliError> {
-    let mut document = if existing.trim().is_empty() {
-        DocumentMut::new()
-    } else {
-        existing
-            .parse::<DocumentMut>()
-            .map_err(|error| CliError::Install(format!("invalid TOML: {error}")))?
-    };
-    if !document.as_table().contains_key("features") {
-        document["features"] = table();
-    }
-    document["features"]["codex_hooks"] = value(true);
-    Ok(document.to_string())
-}
-
-// Parses Hermes YAML, merges generated hooks through the shared JSON hook merger, and serializes
-// back to YAML. Empty files are treated as no existing configuration.
-fn merge_hermes_config(existing: &str, generated: Value) -> Result<String, CliError> {
+/// Parses Hermes YAML, merges generated hooks through the shared JSON hook merger, and serializes
+/// back to YAML. Empty input is treated as no existing configuration.
+pub(crate) fn merge_hermes_config(existing: &str, generated: Value) -> Result<String, CliError> {
     let existing = if existing.trim().is_empty() {
         Value::Null
     } else {
@@ -647,7 +378,7 @@ fn merge_hermes_config(existing: &str, generated: Value) -> Result
 pub(crate) fn read_json_file(path: &Path) -> Result<Value, CliError> {
     match std::fs::read_to_string(path) {
         Ok(raw) => serde_json::from_str(&raw).map_err(|error| {
@@ -658,44 +389,8 @@
     }
 }
 
-// Writes one planned file, creating parents and backing up any existing file first. Backup naming
-// is delegated to `backup_path` so the original extension is preserved in the backup filename.
-fn write_planned_file(file: &PlannedFile) -> Result<(), CliError> {
-    if let Some(parent) = file.path.parent() {
-        std::fs::create_dir_all(parent)?;
-    }
-    if file.path.exists() {
-        std::fs::copy(&file.path, backup_path(&file.path)?)?;
-    }
-    std::fs::write(&file.path, &file.contents)?;
-    Ok(())
-}
-
-// Builds a timestamped backup path beside the original file. If a file has no extension, `config`
-// is used so backup names remain recognizable.
-fn backup_path(path: &Path) -> Result<PathBuf, CliError> {
-    let timestamp = SystemTime::now()
-        .duration_since(UNIX_EPOCH)
-        .map_err(|error| CliError::Install(error.to_string()))?
-        .as_secs();
-    Ok(path.with_extension(format!(
-        "{}.bak.{timestamp}",
-        path.extension()
-            .and_then(|extension| extension.to_str())
-            .unwrap_or("config")
-    )))
-}
-
-// Resolves a cross-platform home directory from environment variables only, matching config
-// resolution and keeping installer tests isolated through env/test overrides.
-fn home_dir() -> Option<PathBuf> {
-    std::env::var_os("HOME")
-        .or_else(|| std::env::var_os("USERPROFILE"))
-        .map(PathBuf::from)
-}
-
-// Validates optional JSON strings before they are embedded into generated hook-forward commands or
-// headers. This catches quoting/config mistakes during install rather than during a later hook run.
+// Validates optional JSON strings before they are embedded into hook-forward headers. Catches
+// quoting/config mistakes at hook-fire time rather than after the request reaches the gateway.
 fn validate_optional_json(name: &str, value: Option<&str>) -> Result<(), CliError> {
     if let Some(value) = value {
         serde_json::from_str::<Value>(value)
@@ -754,7 +449,7 @@ fn insert_header(
 }
 
 // Converts an optional filesystem path to a header value using loss-tolerant display text. This
-// mirrors installed shell command behavior, where paths are passed as strings.
+// mirrors hook-forward behavior, where paths are passed as strings.
 fn insert_header_path(
     headers: &mut HeaderMap,
     name: &'static str,
@@ -768,34 +463,6 @@
     }
 }
 
-// Prints agent/target-specific follow-up notes for limitations that cannot be encoded directly in
-// hook files, such as GUI/cloud behavior or Hermes consent requirements.
-fn print_target_note(agent: CodingAgent, target: InstallTarget) { - match (agent, target) { - (CodingAgent::ClaudeCode, InstallTarget::Gui | InstallTarget::Both) => { - println!( - "Note: Claude application/web sessions are not configured by Claude Code hooks." - ); - } - (CodingAgent::Codex, InstallTarget::Gui | InstallTarget::Both) => { - println!( - "Note: Codex GUI local sessions can use local config; cloud tasks need separate gateway support." - ); - } - (CodingAgent::Cursor, InstallTarget::Cli | InstallTarget::Both) => { - println!( - "Note: run the Cursor CLI smoke test to confirm cursor-agent loads hooks in your version." - ); - } - (CodingAgent::Hermes, InstallTarget::Cli | InstallTarget::Both) => { - println!( - "Note: Hermes shell hooks prefer NEMO_FLOW_GATEWAY_URL at runtime when set; otherwise they use the installed gateway URL. Hook consent is still required unless approved interactively or through Hermes configuration." - ); - } - _ => {} - } -} - #[cfg(test)] #[path = "../tests/coverage/installer_tests.rs"] mod tests; diff --git a/crates/cli/src/launcher.rs b/crates/cli/src/launcher.rs index 6a0e296b..ff8efc71 100644 --- a/crates/cli/src/launcher.rs +++ b/crates/cli/src/launcher.rs @@ -300,7 +300,7 @@ impl PreparedRun { } } } - CodingAgent::Hermes => run.prepare_hermes(), + CodingAgent::Hermes => run.prepare_hermes(resolved.agents.hermes.hooks_path.as_deref()), } Ok(run) } @@ -423,12 +423,18 @@ impl PreparedRun { Ok(()) } - // Notes Hermes' persistent-hook requirement. Hermes hook approval is outside this launcher, so - // run mode only exports the live gateway URL for hooks that are already installed and approved. - fn prepare_hermes(&mut self) { - self.notes.push( - "Hermes shell hooks must be configured with `nemo-flow install hermes`; this run exports the dynamic gateway URL for approved hooks".into(), - ); + // Surfaces where hermes' shell hooks live so users know what `nemo-flow config hermes` wrote. 
+ // Hermes reads hooks from .hermes/config.yaml on its own; this launcher only exports the live + // gateway URL via NEMO_FLOW_GATEWAY_URL so installed hooks reach the ephemeral gateway. + fn prepare_hermes(&mut self, hooks_path: Option<&std::path::Path>) { + let note = match hooks_path { + Some(path) => format!( + "Hermes hooks at {} — re-run `nemo-flow config hermes` to refresh.", + path.display() + ), + None => "Hermes hooks not yet installed — run `nemo-flow config hermes` once so hermes traces under this gateway.".into(), + }; + self.notes.push(note); } // Spawns the prepared child process with injected environment and waits for its exit status. @@ -583,8 +589,8 @@ fn codex_gateway_provider_config(gateway_url: &str) -> String { // `wire_api="responses"` is the only value codex 0.130+ accepts; the `chat` value was // removed (codex#7782). Codex transparent run therefore only works against upstreams that // implement `/v1/responses` (api.openai.com or a Responses-compatible proxy). For other - // upstreams the user falls back to daemon mode + `nemo-flow install codex` and codex talks - // directly to its configured upstream — we observe hooks but not LLM calls. + // upstreams the user falls back to daemon mode and points codex directly at its configured + // upstream — we observe hooks but not LLM calls. // // `requires_openai_auth=false` so codex doesn't send the ChatGPT-Plus OAuth JWT from // `~/.codex/auth.json` (the JWT is rejected by `api.openai.com` with 401). 
The gateway
diff --git a/crates/cli/src/main.rs b/crates/cli/src/main.rs
index 964c18af..5914aa53 100644
--- a/crates/cli/src/main.rs
+++ b/crates/cli/src/main.rs
@@ -42,10 +42,6 @@ async fn main() -> ExitCode {
 async fn run() -> Result<ExitCode, CliError> {
     let cli = Cli::parse();
     match cli.command {
-        Some(Command::Install(command)) => {
-            installer::install(command)?;
-            Ok(ExitCode::SUCCESS)
-        }
         Some(Command::HookForward(command)) => {
             installer::hook_forward(command).await?;
             Ok(ExitCode::SUCCESS)
diff --git a/crates/cli/src/setup.rs b/crates/cli/src/setup.rs
index 85b15fa7..c8f2700e 100644
--- a/crates/cli/src/setup.rs
+++ b/crates/cli/src/setup.rs
@@ -17,6 +17,7 @@ use toml_edit::{DocumentMut, Item, Table, value};
 use crate::config::CodingAgent;
 use crate::error::CliError;
+use crate::installer::{hermes_hooks, hook_forward_command, merge_hermes_config};
 
 /// Where the setup saves its output.
 #[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub(crate) struct SetupAnswers {
     /// Currently surfaced by the codex setup branch; reusable by any future agent on the
     /// OpenAI route family.
     pub openai_base_url: Option<String>,
+    /// Path recorded under `[agents.hermes].hooks_path` when hermes is selected. Set by `run`
+    /// from `hermes_hooks_path_for_scope` so the wizard preview shows the file the launcher
+    /// will reference. `None` when hermes wasn't selected.
+    pub hermes_hooks_path: Option<PathBuf>,
 
 /// Scans `$PATH` for the supported coding-agent binaries and returns the ones present.
@@ -142,6 +147,11 @@ pub(crate) fn build_config(answers: &SetupAnswers) -> DocumentMut {
         };
         let mut agent_table = Table::new();
         agent_table["command"] = value(command);
+        if matches!(agent, CodingAgent::Hermes)
+            && let Some(path) = answers.hermes_hooks_path.as_deref()
+        {
+            agent_table["hooks_path"] = value(path.display().to_string());
+        }
         agents_table.insert(key, Item::Table(agent_table));
     }
     doc["agents"] = Item::Table(agents_table);
@@ -361,9 +371,67 @@ pub(crate) fn prompt_user(
         backends,
         openinference_endpoint,
         openai_base_url,
+        hermes_hooks_path: None,
     })
 }
 
+/// Returns the path the wizard will record under `[agents.hermes].hooks_path` and write hermes
+/// hooks to. For `Both` scope, the project path wins (matches the config-merge precedence). For
+/// `Global` scope, the home path wins. Returns `None` when hermes is not in the selection.
+pub(crate) fn hermes_hooks_path_for_scope(
+    agents: &[CodingAgent],
+    scope: ConfigScope,
+    cwd: &Path,
+    home: &Path,
+) -> Option<PathBuf> {
+    if !agents.contains(&CodingAgent::Hermes) {
+        return None;
+    }
+    match scope {
+        ConfigScope::Project | ConfigScope::Both => Some(cwd.join(".hermes").join("config.yaml")),
+        ConfigScope::Global => Some(home.join(".hermes").join("config.yaml")),
+    }
+}
+
+/// Writes/merges `.hermes/config.yaml` hook config for every scope-applicable location so hermes
+/// fires `nemo-flow hook-forward hermes` on every hook event after setup. Idempotent: existing
+/// hook entries are preserved and our generated groups are appended only when missing.
+///
+/// Returns the list of paths actually written so callers can surface them to the user.
+pub(crate) fn install_hermes_hooks(
+    scope: ConfigScope,
+    cwd: &Path,
+    home: &Path,
+) -> Result<Vec<PathBuf>, CliError> {
+    let generated = hermes_hooks(&hook_forward_command("nemo-flow", CodingAgent::Hermes));
+    let mut written = Vec::new();
+    for path in hermes_hook_targets(scope, cwd, home) {
+        let existing = match std::fs::read_to_string(&path) {
+            Ok(raw) => raw,
+            Err(error) if error.kind() == std::io::ErrorKind::NotFound => String::new(),
+            Err(error) => return Err(CliError::Io(error)),
+        };
+        let merged = merge_hermes_config(&existing, generated.clone())?;
+        if let Some(parent) = path.parent() {
+            std::fs::create_dir_all(parent)?;
+        }
+        std::fs::write(&path, merged)?;
+        written.push(path);
+    }
+    Ok(written)
+}
+
+fn hermes_hook_targets(scope: ConfigScope, cwd: &Path, home: &Path) -> Vec<PathBuf> {
+    let mut targets = Vec::new();
+    if matches!(scope, ConfigScope::Project | ConfigScope::Both) {
+        targets.push(cwd.join(".hermes").join("config.yaml"));
+    }
+    if matches!(scope, ConfigScope::Global | ConfigScope::Both) {
+        targets.push(home.join(".hermes").join("config.yaml"));
+    }
+    targets
+}
+
 /// Pre-filled wizard defaults read from an existing `config.toml`. When the file is missing or
 /// unparseable the defaults are all-empty and the wizard behaves like a first-run setup.
 #[derive(Debug, Clone, Default)]
@@ -682,20 +750,31 @@ fn agent_key_and_command(agent: CodingAgent) -> (&'static str, &'static str) {
 /// `nemo-flow config` asks the full set so users can configure multiple agents at once.
 pub(crate) async fn run(agent_hint: Option<CodingAgent>) -> Result<(), CliError> {
     let detected = detect_installed_agents();
-    let answers = prompt_user(&detected, agent_hint)?;
-    let doc = build_config(&answers);
+    let mut answers = prompt_user(&detected, agent_hint)?;
     let cwd = std::env::current_dir()?;
     let home = home_dir().ok_or_else(|| {
         CliError::Config("cannot determine home directory (set $HOME or $USERPROFILE)".into())
     })?;
-    let preview_paths = preview_paths(answers.scope, &cwd, &home);
+    answers.hermes_hooks_path =
+        hermes_hooks_path_for_scope(&answers.agents, answers.scope, &cwd, &home);
+
+    let doc = build_config(&answers);
+    let mut preview_paths = preview_paths(answers.scope, &cwd, &home);
+    preview_paths.extend(
+        hermes_hook_targets(answers.scope, &cwd, &home)
+            .into_iter()
+            .filter(|_| answers.agents.contains(&CodingAgent::Hermes)),
+    );
     if !confirm_summary(&preview_paths, &doc)? {
         return Err(CliError::Config("setup cancelled — no config saved".into()));
     }
-    let written = save_config(&doc, answers.scope, &cwd, &home, agent_hint)?;
+    let mut written = save_config(&doc, answers.scope, &cwd, &home, agent_hint)?;
+    if answers.agents.contains(&CodingAgent::Hermes) {
+        written.extend(install_hermes_hooks(answers.scope, &cwd, &home)?);
+    }
     println!();
     println!("  ✓ Saved:");
     for path in &written {
diff --git a/crates/cli/tests/cli_tests.rs b/crates/cli/tests/cli_tests.rs
index 9897af41..09dc21fa 100644
--- a/crates/cli/tests/cli_tests.rs
+++ b/crates/cli/tests/cli_tests.rs
@@ -65,37 +65,6 @@ fn cli_easy_path_invokes_setup_when_no_config_found() {
     );
 }
 
-#[test]
-fn cli_install_dry_run_plans_without_writing() {
-    let temp = tempfile::tempdir().unwrap();
-    let output = Command::new(gateway_bin())
-        .env("HOME", temp.path())
-        .args([
-            "install",
-            "codex",
-            "--dry-run",
-            "--print",
-            "--target",
-            "both",
-            "--gateway-url",
-            "http://127.0.0.1:4040",
-            "--session-metadata",
-            r#"{"team":"cli"}"#,
-            "--plugin-config",
-            r#"{"components":[]}"#,
-            "--gateway-mode",
-
"required", - ]) - .output() - .unwrap(); - - assert!(output.status.success()); - let stdout = String::from_utf8_lossy(&output.stdout); - assert!(stdout.contains("Dry run: would install")); - assert!(stdout.contains("hook-forward codex")); - assert!(!temp.path().join(".codex/hooks.json").exists()); -} - #[test] fn cli_run_dry_run_resolves_config_and_command() { let temp = tempfile::tempdir().unwrap(); diff --git a/crates/cli/tests/coverage/installer_tests.rs b/crates/cli/tests/coverage/installer_tests.rs index f85e085f..9ad05aa3 100644 --- a/crates/cli/tests/coverage/installer_tests.rs +++ b/crates/cli/tests/coverage/installer_tests.rs @@ -3,146 +3,6 @@ use super::*; -fn command(agent: CodingAgent, root: &Path) -> InstallCommand { - InstallCommand { - agent, - scope: InstallScope::User, - target: InstallTarget::Both, - gateway_url: "http://127.0.0.1:4040".into(), - atif_dir: Some(root.join("atif")), - openinference_endpoint: Some("http://otel:4318/v1/traces".into()), - profile: Some("default".into()), - session_metadata: Some(r#"{"team":"agent-observability"}"#.into()), - plugin_config: Some(r#"{"components":[]}"#.into()), - gateway_mode: Some(GatewayMode::Required), - dry_run: false, - print: false, - home_dir: Some(root.to_path_buf()), - project_dir: None, - } -} - -fn project_command(agent: CodingAgent, root: &Path) -> InstallCommand { - InstallCommand { - scope: InstallScope::Project, - project_dir: Some(root.to_path_buf()), - ..command(agent, root) - } -} - -#[test] -fn generates_claude_install_file() { - let temp = tempfile::tempdir().unwrap(); - let files = planned_files(&command(CodingAgent::ClaudeCode, temp.path())).unwrap(); - assert_eq!(files.len(), 1); - assert!(files[0].path.ends_with(".claude/settings.json")); - let json: Value = serde_json::from_str(&files[0].contents).unwrap(); - assert!(json["hooks"]["SessionStart"].is_array()); - assert!(json["hooks"]["UserPromptSubmit"].is_array()); - assert!(json["hooks"]["SessionEnd"].is_array()); - 
assert!(json["hooks"]["Stop"].is_array()); - assert!(json["hooks"]["Notification"].is_array()); - assert!( - json["hooks"]["PermissionRequest"].is_array(), - "PermissionRequest must be injected (Claude + Codex both support it)" - ); - assert!(json["hooks"]["PostCompact"].is_array()); - assert!( - json["hooks"]["AfterAgentResponse"].is_null(), - "AfterAgentResponse is not in Claude's hook whitelist; it must not be injected (would cause Claude to reject the entire hooks file)" - ); - assert!( - json["hooks"]["AfterAgentThought"].is_null(), - "AfterAgentThought is not in Claude's hook whitelist; it must not be injected" - ); - assert!(json["hooks"]["SessionEnd"][0].get("matcher").is_none()); - assert!( - json["hooks"]["PreToolUse"][0]["hooks"][0]["command"] - .as_str() - .unwrap() - .contains("hook-forward claude") - ); -} - -#[test] -fn generates_codex_config_and_hooks() { - let temp = tempfile::tempdir().unwrap(); - let files = planned_files(&command(CodingAgent::Codex, temp.path())).unwrap(); - assert_eq!(files.len(), 2); - assert!(files[0].contents.contains("codex_hooks = true")); - let json: Value = serde_json::from_str(&files[1].contents).unwrap(); - assert!(json["hooks"]["Stop"].is_array()); - assert!(json["hooks"]["UserPromptSubmit"].is_array()); - assert!(json["hooks"]["SessionStart"].is_array()); - assert!(json["hooks"]["SessionEnd"].is_array()); - assert!(json["hooks"]["Notification"].is_array()); - assert!( - json["hooks"]["PermissionRequest"].is_array(), - "PermissionRequest must be injected for Codex" - ); - assert!(json["hooks"]["PostCompact"].is_array()); - assert!( - json["hooks"]["AfterAgentResponse"].is_null(), - "AfterAgentResponse must not be injected — not part of the supported event surface" - ); - assert!( - json["hooks"]["AfterAgentThought"].is_null(), - "AfterAgentThought must not be injected — not part of the supported event surface" - ); - assert!(json["hooks"]["Stop"][0].get("matcher").is_none()); - assert!( - 
json["hooks"]["PreToolUse"][0]["hooks"][0]["command"] - .as_str() - .unwrap() - .contains("hook-forward codex") - ); -} - -#[test] -fn generates_cursor_hooks() { - let temp = tempfile::tempdir().unwrap(); - let files = planned_files(&command(CodingAgent::Cursor, temp.path())).unwrap(); - assert_eq!(files.len(), 1); - let json: Value = serde_json::from_str(&files[0].contents).unwrap(); - assert!(json["hooks"]["beforeShellExecution"].is_array()); - assert!(json["hooks"]["beforeSubmitPrompt"].is_array()); - assert!(json["hooks"]["afterAgentResponse"].is_array()); - assert!(json["hooks"]["afterAgentThought"].is_array()); - assert!( - json["hooks"]["afterAgentThought"][0] - .get("matcher") - .is_none() - ); - assert!( - json["hooks"]["beforeShellExecution"][0]["hooks"][0]["command"] - .as_str() - .unwrap() - .contains("hook-forward cursor") - ); -} - -#[test] -fn generates_hermes_shell_hook_config() { - let temp = tempfile::tempdir().unwrap(); - let files = planned_files(&command(CodingAgent::Hermes, temp.path())).unwrap(); - assert_eq!(files.len(), 1); - assert!(files[0].path.ends_with(".hermes/config.yaml")); - let yaml: Value = serde_yaml::from_str(&files[0].contents).unwrap(); - assert!(yaml["hooks"]["on_session_start"].is_array()); - assert!(yaml["hooks"]["pre_llm_call"].is_array()); - assert!(yaml["hooks"]["post_llm_call"].is_array()); - assert!(yaml["hooks"]["subagent_start"].is_array()); - assert!(yaml["hooks"]["pre_api_request"].is_array()); - assert!(yaml["hooks"]["post_api_request"].is_array()); - assert!(yaml["hooks"]["subagent_stop"].is_array()); - assert!( - yaml["hooks"]["pre_tool_call"][0]["command"] - .as_str() - .unwrap() - .contains("hook-forward hermes") - ); -} - #[test] fn hermes_config_merge_preserves_existing_yaml() { let existing = r#" @@ -220,84 +80,6 @@ fn merge_hooks_is_idempotent_and_preserves_existing_entries() { assert_eq!(twice["hooks"]["Stop"].as_array().unwrap().len(), 2); } -#[test] -fn 
project_install_uses_project_dir_and_preserves_codex_toml() { - let temp = tempfile::tempdir().unwrap(); - let codex_dir = temp.path().join(".codex"); - std::fs::create_dir_all(&codex_dir).unwrap(); - std::fs::write( - codex_dir.join("config.toml"), - "[features]\nother = true\n[model_providers.openai]\nbase_url = \"http://old\"\n", - ) - .unwrap(); - - let files = planned_files(&project_command(CodingAgent::Codex, temp.path())).unwrap(); - - assert!(files[0].path.starts_with(temp.path())); - assert!(files[0].contents.contains("other = true")); - assert!(files[0].contents.contains("codex_hooks = true")); - assert!(files[0].contents.contains("[model_providers.openai]")); -} - -#[test] -fn install_writes_file_and_backs_up_existing_config() { - let temp = tempfile::tempdir().unwrap(); - let claude_dir = temp.path().join(".claude"); - std::fs::create_dir_all(&claude_dir).unwrap(); - let settings = claude_dir.join("settings.json"); - std::fs::write(&settings, r#"{"hooks":{"Stop":[]}}"#).unwrap(); - - install(command(CodingAgent::ClaudeCode, temp.path())).unwrap(); - - let installed = std::fs::read_to_string(&settings).unwrap(); - assert!(installed.contains("hook-forward claude")); - let backups: Vec<_> = std::fs::read_dir(&claude_dir) - .unwrap() - .map(|entry| entry.unwrap().file_name().to_string_lossy().into_owned()) - .filter(|name| name.starts_with("settings.json.bak.")) - .collect(); - assert_eq!(backups.len(), 1); -} - -#[test] -fn install_prints_target_notes_for_non_claude_agents() { - for agent in [CodingAgent::Codex, CodingAgent::Cursor, CodingAgent::Hermes] { - let temp = tempfile::tempdir().unwrap(); - let mut command = command(agent, temp.path()); - command.target = InstallTarget::Both; - - install(command).unwrap(); - } -} - -#[test] -fn target_note_noops_for_unmatched_agent_target_pairs() { - print_target_note(CodingAgent::Codex, InstallTarget::Cli); -} - -#[test] -fn install_dry_run_does_not_write_files() { - let temp = tempfile::tempdir().unwrap(); - let 
mut command = command(CodingAgent::Cursor, temp.path()); - command.dry_run = true; - command.print = true; - - install(command).unwrap(); - - assert!(!temp.path().join(".cursor/hooks.json").exists()); -} - -#[test] -fn invalid_json_config_is_rejected_before_planning() { - let temp = tempfile::tempdir().unwrap(); - let mut command = command(CodingAgent::Codex, temp.path()); - command.session_metadata = Some("not-json".into()); - - let error = install(command).unwrap_err().to_string(); - - assert!(error.contains("invalid session metadata")); -} - #[test] fn merge_hooks_rejects_malformed_shapes() { assert!(merge_hooks(json!([]), codex_hooks("cmd")).is_err()); @@ -306,33 +88,8 @@ fn merge_hooks_rejects_malformed_shapes() { assert!(merge_hooks(json!({}), json!({ "hooks": [] })).is_err()); } -#[test] -fn invalid_existing_files_are_reported() { - let temp = tempfile::tempdir().unwrap(); - let cursor_dir = temp.path().join(".cursor"); - std::fs::create_dir_all(&cursor_dir).unwrap(); - std::fs::write(cursor_dir.join("hooks.json"), "not-json").unwrap(); - - let error = planned_files(&command(CodingAgent::Cursor, temp.path())) - .unwrap_err() - .to_string(); - - assert!(error.contains("invalid JSON")); - - let codex_dir = temp.path().join(".codex"); - std::fs::create_dir_all(&codex_dir).unwrap(); - std::fs::write(codex_dir.join("config.toml"), "not = [valid").unwrap(); - let error = planned_files(&command(CodingAgent::Codex, temp.path())) - .unwrap_err() - .to_string(); - assert!(error.contains("invalid TOML")); -} - #[test] fn helper_formatting_and_headers_cover_optional_paths() { - assert_eq!(shell_quote("plain/arg-1"), "plain/arg-1"); - assert_eq!(shell_quote("needs space"), "'needs space'"); - assert_eq!(shell_quote("can't"), "'can'\\''t'"); assert!(event_matches_tools("PermissionRequest")); assert!(!event_matches_tools("SessionStart")); @@ -387,7 +144,8 @@ fn generated_hook_dispatch_covers_all_agents() { #[test] fn packaged_hook_configs_are_valid_json() { - let root = 
PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("../../integrations/coding-agents"); + let root = std::path::PathBuf::from(env!("CARGO_MANIFEST_DIR")) + .join("../../integrations/coding-agents"); for path in [ root.join("claude-code/hooks/hooks.json"), root.join("codex/hooks/hooks.json"), diff --git a/crates/cli/tests/coverage/launcher_tests.rs b/crates/cli/tests/coverage/launcher_tests.rs index 9a3f46c1..fb1b6627 100644 --- a/crates/cli/tests/coverage/launcher_tests.rs +++ b/crates/cli/tests/coverage/launcher_tests.rs @@ -43,6 +43,7 @@ fn uses_configured_command_when_no_argv_is_supplied() { let agents = AgentConfigs { codex: AgentCommandConfig { command: Some("codex --full-auto".into()), + hooks_path: None, }, ..AgentConfigs::default() }; @@ -71,6 +72,7 @@ fn uses_configured_hermes_command_when_no_argv_is_supplied() { let agents = AgentConfigs { hermes: AgentCommandConfig { command: Some("hermes --yolo chat".into()), + hooks_path: None, }, ..AgentConfigs::default() }; @@ -322,7 +324,7 @@ fn prepares_hermes_hook_environment() { .iter() .any(|(name, _)| name == "HERMES_ACCEPT_HOOKS") ); - assert!(prepared.notes[0].contains("approved hooks")); + assert!(prepared.notes[0].contains("nemo-flow config hermes")); } #[test] diff --git a/crates/cli/tests/coverage/setup_tests.rs b/crates/cli/tests/coverage/setup_tests.rs index 6f364d6c..3d55940a 100644 --- a/crates/cli/tests/coverage/setup_tests.rs +++ b/crates/cli/tests/coverage/setup_tests.rs @@ -2,10 +2,15 @@ // SPDX-License-Identifier: Apache-2.0 use super::*; -use std::os::unix::fs::PermissionsExt; +// Stub-binary detection relies on the Unix executable bit. Windows-side agent presence checks +// use a different mechanism (e.g. `.exe` extension matching), so this lookup test is gated to +// Unix to keep cross-platform CI green; covering the Windows code path is left to a separate +// test once the launcher grows real Windows support. 
+#[cfg(unix)] #[test] fn detect_installed_agents_finds_binaries_on_path() { + use std::os::unix::fs::PermissionsExt; let temp = tempfile::tempdir().unwrap(); // Drop stub binaries for two of the four supported agents — confirming detection picks up // only the ones present and ignores the others. @@ -46,6 +51,7 @@ fn build_config_emits_observability_section_when_atif_selected() { backends: vec![ObservabilityBackend::Atif], openinference_endpoint: None, openai_base_url: None, + hermes_hooks_path: None, }; let doc = build_config(&answers); @@ -64,6 +70,7 @@ fn build_config_emits_export_section_when_openinference_selected() { backends: vec![ObservabilityBackend::OpenInference], openinference_endpoint: Some("http://localhost:6006/v1/traces".into()), openai_base_url: None, + hermes_hooks_path: None, }; let doc = build_config(&answers); @@ -81,6 +88,7 @@ fn build_config_skips_empty_sections_when_no_backends_selected() { backends: vec![], openinference_endpoint: None, openai_base_url: None, + hermes_hooks_path: None, }; let doc = build_config(&answers); @@ -99,6 +107,7 @@ fn build_config_emits_agents_block_with_user_facing_keys() { backends: vec![], openinference_endpoint: None, openai_base_url: None, + hermes_hooks_path: None, }; let doc = build_config(&answers); @@ -119,6 +128,7 @@ fn build_config_writes_upstream_block_for_custom_openai_base_url() { backends: vec![ObservabilityBackend::Atif], openinference_endpoint: None, openai_base_url: Some("https://litellm.internal/v1".into()), + hermes_hooks_path: None, }; let rendered = build_config(&answers).to_string(); assert!(rendered.contains("[upstream]")); @@ -133,6 +143,7 @@ fn build_config_omits_upstream_block_when_openai_base_url_is_none() { backends: vec![ObservabilityBackend::Atif], openinference_endpoint: None, openai_base_url: None, + hermes_hooks_path: None, }; let rendered = build_config(&answers).to_string(); assert!(!rendered.contains("[upstream]")); @@ -146,6 +157,7 @@ fn 
save_config_writes_project_scope_to_workspace_dir() { backends: vec![ObservabilityBackend::Atif], openinference_endpoint: None, openai_base_url: None, + hermes_hooks_path: None, }; let doc = build_config(&answers); let temp = tempfile::tempdir().unwrap(); @@ -190,6 +202,7 @@ command = "codex --full-auto" backends: vec![ObservabilityBackend::Atif], openinference_endpoint: None, openai_base_url: None, + hermes_hooks_path: None, }; let doc = build_config(&answers); save_config( @@ -234,6 +247,7 @@ fn save_config_writes_both_scopes_when_both_selected() { backends: vec![ObservabilityBackend::Atif], openinference_endpoint: None, openai_base_url: None, + hermes_hooks_path: None, }; let doc = build_config(&answers); let cwd = tempfile::tempdir().unwrap(); @@ -245,3 +259,44 @@ fn save_config_writes_both_scopes_when_both_selected() { assert!(written.iter().any(|p| p.starts_with(cwd.path()))); assert!(written.iter().any(|p| p.starts_with(home.path()))); } + +#[test] +fn build_config_emits_hooks_path_for_hermes_when_set() { + let answers = SetupAnswers { + scope: ConfigScope::Project, + agents: vec![CodingAgent::Hermes], + backends: vec![], + openinference_endpoint: None, + openai_base_url: None, + hermes_hooks_path: Some(std::path::PathBuf::from("/tmp/proj/.hermes/config.yaml")), + }; + let rendered = build_config(&answers).to_string(); + assert!(rendered.contains("[agents.hermes]")); + assert!(rendered.contains(r#"hooks_path = "/tmp/proj/.hermes/config.yaml""#)); +} + +#[test] +fn install_hermes_hooks_writes_yaml_and_merges_existing() { + let cwd = tempfile::tempdir().unwrap(); + let home = tempfile::tempdir().unwrap(); + // Seed an existing hermes config so we can verify the merge preserves user state. 
+ let project_hermes = cwd.path().join(".hermes"); + std::fs::create_dir_all(&project_hermes).unwrap(); + std::fs::write( + project_hermes.join("config.yaml"), + "model:\n provider: auto\n", + ) + .unwrap(); + + let written = install_hermes_hooks(ConfigScope::Both, cwd.path(), home.path()).unwrap(); + + assert_eq!(written.len(), 2); + let project_yaml = std::fs::read_to_string(cwd.path().join(".hermes/config.yaml")).unwrap(); + assert!(project_yaml.contains("nemo-flow hook-forward hermes")); + assert!( + project_yaml.contains("provider: auto"), + "existing model block must survive merge" + ); + let home_yaml = std::fs::read_to_string(home.path().join(".hermes/config.yaml")).unwrap(); + assert!(home_yaml.contains("nemo-flow hook-forward hermes")); +} diff --git a/docs/integrate-frameworks/coding-agent-claude-code.md b/docs/integrate-frameworks/coding-agent-claude-code.md index aa155683..13bdfa69 100644 --- a/docs/integrate-frameworks/coding-agent-claude-code.md +++ b/docs/integrate-frameworks/coding-agent-claude-code.md @@ -60,20 +60,10 @@ command = "claude" Then run `nemo-flow run --agent claude` to use the configured command. User config takes priority over project and global config. -## Persistent Install +## Standalone Gateway -Use persistent hooks only when you want Claude Code configured outside the -wrapper: - -```bash -nemo-flow install claude-code \ - --scope user \ - --target cli \ - --gateway-url http://127.0.0.1:4040 \ - --atif-dir .nemo-flow/atif -``` - -Then start the gateway manually: +Use the long-running gateway only when you want Claude Code running outside the +wrapper (e.g., already configured by an IDE): ```bash NEMO_FLOW_ATIF_DIR=.nemo-flow/atif nemo-flow --bind 127.0.0.1:4040 @@ -87,7 +77,10 @@ claude ``` The gateway forwards Anthropic `/v1/messages`, `/v1/messages/count_tokens`, and -model routes without rewriting provider JSON. +model routes without rewriting provider JSON. 
Hook events (tool calls, session +markers) are only captured when running through `nemo-flow claude` or +`nemo-flow run --agent claude`, which inject ephemeral hooks into the launched +process. ## Captured Events diff --git a/docs/integrate-frameworks/coding-agent-codex.md b/docs/integrate-frameworks/coding-agent-codex.md index 6b6a68e4..5a86eff0 100644 --- a/docs/integrate-frameworks/coding-agent-codex.md +++ b/docs/integrate-frameworks/coding-agent-codex.md @@ -68,20 +68,17 @@ command = "codex" Then run `nemo-flow run --agent codex` to use the configured command. User config takes priority over project and global config. -## Persistent Install +## Standalone Gateway -Use persistent hooks only when you want Codex configured outside the wrapper: +Use the long-running gateway only when you want Codex running outside the +wrapper: ```bash -nemo-flow install codex \ - --scope user \ - --target both \ - --gateway-url http://127.0.0.1:4040 \ - --atif-dir .nemo-flow/atif +NEMO_FLOW_ATIF_DIR=.nemo-flow/atif nemo-flow --bind 127.0.0.1:4040 ``` -Then start the gateway manually and configure local Codex to use a gateway -provider alias instead of overriding the reserved built-in `openai` provider: +Then configure local Codex to use a gateway provider alias instead of +overriding the reserved built-in `openai` provider: ```toml model_provider = "nemo-flow-openai" diff --git a/docs/integrate-frameworks/coding-agent-cursor.md b/docs/integrate-frameworks/coding-agent-cursor.md index 4bfd03a8..a258cd45 100644 --- a/docs/integrate-frameworks/coding-agent-cursor.md +++ b/docs/integrate-frameworks/coding-agent-cursor.md @@ -66,22 +66,19 @@ patch_restore_hooks = true Then run `nemo-flow run --agent cursor` to use the configured command. User config takes priority over project and global config. 
-## Persistent Install
+## Standalone Gateway
 
-Use persistent hooks only when you want Cursor configured outside the wrapper:
+Use the long-running gateway only when you want Cursor running outside the
+wrapper (e.g., the Cursor GUI). Start the gateway manually:
 
 ```bash
-nemo-flow install cursor \
-  --scope project \
-  --target gui \
-  --gateway-url http://127.0.0.1:4040 \
-  --atif-dir .nemo-flow/atif
+NEMO_FLOW_ATIF_DIR=.nemo-flow/atif nemo-flow --bind 127.0.0.1:4040
 ```
 
-Then start the gateway manually and point Cursor provider traffic at
-`http://127.0.0.1:4040` where Cursor exposes provider base URL configuration.
-Hook-only Cursor mode observes agent and tool lifecycle but cannot provide
-complete LLM lifecycle. Missing LLM spans are expected when Cursor sends model
+Then point Cursor provider traffic at `http://127.0.0.1:4040` wherever Cursor
+exposes provider base URL configuration. Without the wrapper, hook events are
+not captured — Cursor GUI mode only emits LLM lifecycle as traffic passes
+through the gateway. Missing LLM spans are expected when Cursor sends model
 traffic directly to the provider or through a remote service.
 
 ## Captured Events
diff --git a/docs/integrate-frameworks/coding-agent-gateway.md b/docs/integrate-frameworks/coding-agent-gateway.md
index 1c631ebf..f693b975 100644
--- a/docs/integrate-frameworks/coding-agent-gateway.md
+++ b/docs/integrate-frameworks/coding-agent-gateway.md
@@ -213,26 +207,20 @@ Cursor hook-only mode observes agent, subagent, and tool lifecycle. To observe
 Cursor LLM lifecycle completely, configure Cursor model traffic to use the
 gateway.
 
-## Persistent Install
+## Hook Forwarding
 
-The repository also includes installable integration packages under
-`integrations/coding-agents/`. Use `install` when you want stable hook config
-instead of the transparent wrapper.
+Hooks generated by the wrapper (Claude/Codex/Cursor ephemeral, Hermes via
+setup) invoke `nemo-flow hook-forward <agent>` from stdin.
Inside the wrapper
+the gateway URL comes from `NEMO_FLOW_GATEWAY_URL` injected on every run;
+outside the wrapper (Hermes standalone, IDE-launched Claude/Codex) the hook
+command falls back to its embedded `--gateway-url`.
 
-```bash
-nemo-flow install claude-code --scope user --target cli --gateway-url http://127.0.0.1:4040
-nemo-flow install codex --scope user --target both --gateway-url http://127.0.0.1:4040
-nemo-flow install cursor --scope project --target gui --gateway-url http://127.0.0.1:4040
-nemo-flow install hermes --scope user --target cli --gateway-url http://127.0.0.1:4040
-```
+`hook-forward` reads the canonical hook payload from standard input, sends it
+to the matching endpoint, and prints the endpoint response. It fails open by
+default so observability outages do not block the coding agent. Add
+`--fail-closed` only when policy requires hook delivery to block the agent.
 
-Use `--dry-run` to see which files would be changed. Use `--print` to print the
-merged file contents. Existing config files are backed up before the installer
-writes replacement files, and generated hook entries are appended only when the
-same NeMo Flow entry is not already present.
-
-Common install options become hook-forwarding command arguments and gateway
-headers:
+Optional flags map to gateway headers:
 
 - `--atif-dir` sets `x-nemo-flow-atif-dir`.
 - `--openinference-endpoint` sets `x-nemo-flow-openinference-endpoint`.
@@ -241,23 +235,6 @@ headers:
 - `--profile` sets `x-nemo-flow-config-profile`.
 - `--gateway-mode` sets `x-nemo-flow-gateway-mode`.
 
-Static integration bundles rely on the wrapper-provided
-`NEMO_FLOW_GATEWAY_URL` and run:
-
-```bash
-nemo-flow hook-forward <agent>
-```
-
-Persistent installer output embeds `--gateway-url` and any selected export or
-session options directly in the generated hook command.
-
-`hook-forward` reads the canonical hook payload from standard input, sends it to
-the matching endpoint, and prints the endpoint response.
In transparent runs it -discovers the gateway through `NEMO_FLOW_GATEWAY_URL`; in persistent installs -you can still pass `--gateway-url`. It fails open by default so observability -outages do not block the coding agent. Add `--fail-closed` only when policy -requires hook delivery to block the agent. - ## Agent Guides Use the per-agent guide for end-to-end setup, smoke tests, and GUI or @@ -268,6 +245,6 @@ application-mode caveats. - [Cursor Gateway Guide](coding-agent-cursor.md) - [Hermes Gateway Guide](coding-agent-hermes.md) -Each guide covers transparent run setup, persistent installation, gateway -routing, hook smoke tests, ATIF export verification on session end, and -troubleshooting missing LLM lifecycle data. +Each guide covers transparent run setup, gateway routing, hook smoke tests, +ATIF export verification on session end, and troubleshooting missing LLM +lifecycle data. diff --git a/docs/integrate-frameworks/coding-agent-hermes.md b/docs/integrate-frameworks/coding-agent-hermes.md index 342b91e2..09da2a52 100644 --- a/docs/integrate-frameworks/coding-agent-hermes.md +++ b/docs/integrate-frameworks/coding-agent-hermes.md @@ -67,38 +67,31 @@ command = "hermes" Then run `nemo-flow run --agent hermes` to use the configured command. User config takes priority over project and global config. -## Persistent Install +## Hermes Hook Setup -Use persistent hooks to merge NeMo Flow hook commands into -`~/.hermes/config.yaml` or the project `.hermes/config.yaml`: +Unlike the other agents, Hermes reads hooks from `.hermes/config.yaml`. The +setup wizard writes that file for you when you select hermes — running +`nemo-flow config` (or `nemo-flow config hermes` to scope to one agent) merges +NeMo Flow hook commands into the YAML, preserving any existing config, and +records the path under `[agents.hermes].hooks_path` in `.nemo-flow/config.toml`. 
-```bash -nemo-flow install hermes \ - --scope user \ - --target cli \ - --gateway-url http://127.0.0.1:4040 \ - --atif-dir .nemo-flow/atif -``` - -The installer preserves existing YAML config, appends missing NeMo Flow hook -entries, and backs up the file before writing. The generated Hermes hooks cover -`on_session_start`, `on_session_end`, `on_session_finalize`, -`on_session_reset`, `pre_llm_call`, `post_llm_call`, `pre_tool_call`, -`post_tool_call`, `subagent_start`, and `subagent_stop`. +The generated Hermes hooks cover `on_session_start`, `on_session_end`, +`on_session_finalize`, `on_session_reset`, `pre_llm_call`, `post_llm_call`, +`pre_tool_call`, `post_tool_call`, `subagent_start`, and `subagent_stop`. -Hermes hook forwarding prefers `NEMO_FLOW_GATEWAY_URL` when it is set, even if -the installed command also includes `--gateway-url`. This lets persistent hook -config work with `nemo-flow run`, where each run uses a dynamic local -port. Without `NEMO_FLOW_GATEWAY_URL`, the installed `--gateway-url` is used. +Hermes hook forwarding prefers `NEMO_FLOW_GATEWAY_URL` when set (this is what +`nemo-flow hermes` injects on every run). When launched outside the wrapper — +e.g., bare `hermes` against a long-running gateway — the hook command falls +back to `--gateway-url http://127.0.0.1:4040`. -Then start the gateway manually for persistent mode: +For standalone gateway mode, start the daemon manually: ```bash NEMO_FLOW_ATIF_DIR=.nemo-flow/atif nemo-flow --bind 127.0.0.1:4040 ``` -Point Hermes provider traffic at `http://127.0.0.1:4040` for any provider mode -that exposes a local OpenAI-compatible or Anthropic-compatible base URL. +Then point Hermes provider traffic at `http://127.0.0.1:4040` for any provider +mode that exposes a local OpenAI-compatible or Anthropic-compatible base URL. 
## Smoke Test

diff --git a/integrations/coding-agents/README.md b/integrations/coding-agents/README.md
index e2409082..c4b3635f 100644
--- a/integrations/coding-agents/README.md
+++ b/integrations/coding-agents/README.md
@@ -29,9 +29,9 @@ environment variables, or shared TOML config.
   for Codex LLM gateway routing.
 - `cursor/` installs a Cursor `.cursor/hooks.json` bundle targeting
   `POST /hooks/cursor`.
-- Hermes does not require a static bundle in this directory. Use
-  `nemo-flow install hermes` to merge hook commands into
-  `.hermes/config.yaml`.
+- Hermes does not require a static bundle in this directory. The setup wizard
+  (`nemo-flow config`) merges hook commands into `.hermes/config.yaml` when
+  hermes is selected.
 - `hermes/` contains a native Hermes Python plugin prototype that writes ATIF
   from Hermes plugin middleware without running the gateway HTTP process.
@@ -55,8 +55,9 @@ command name. Use `--dry-run --print` to inspect generated config without
 launching.
 
 Hermes transparent runs export the dynamic `NEMO_FLOW_GATEWAY_URL`, but Hermes
-hooks still need to be installed or approved in Hermes configuration before
-they can call the gateway.
+hooks must already be present in `.hermes/config.yaml` before they can call the
+gateway. The setup wizard (`nemo-flow config`) writes that file for you when
+you select hermes.
 
 Shared TOML config is loaded from `/etc/nemo-flow/config.toml`, then nearest
 project `.nemo-flow/config.toml`, then
@@ -78,50 +79,18 @@ command = "codex"
 command = "hermes"
 ```
 
-## Persistent Setup
+## Hook Forwarding
 
-Use `install` only when you want persistent hook configuration:
+Hooks call `nemo-flow hook-forward <agent>` with the canonical hook payload on
+stdin. The wrapper injects `NEMO_FLOW_GATEWAY_URL` so the same hook command
+reaches the ephemeral per-run gateway; hermes hooks fall back to an embedded
+`--gateway-url` when running outside the wrapper.
-```bash
-nemo-flow install claude-code --scope user --target cli --gateway-url http://127.0.0.1:4040
-nemo-flow install codex --scope user --target both --gateway-url http://127.0.0.1:4040
-nemo-flow install cursor --scope project --target gui --gateway-url http://127.0.0.1:4040
-nemo-flow install hermes --scope user --target cli --gateway-url http://127.0.0.1:4040
-```
-
-Inspect generated changes before writing:
-
-```bash
-nemo-flow install codex \
-  --scope user \
-  --target both \
-  --gateway-url http://127.0.0.1:4040 \
-  --atif-dir .nemo-flow/atif \
-  --dry-run \
-  --print
-```
-
-The installer backs up existing config files, merges only NeMo Flow hook
-entries, and avoids adding duplicate NeMo Flow entries on repeated runs. In
-persistent mode you start the gateway yourself and pass `--gateway-url` or set
-`NEMO_FLOW_GATEWAY_URL` for hook forwarding.
-
-## Common Options
-
-Static bundles rely on `NEMO_FLOW_GATEWAY_URL` from `nemo-flow run` and
-call:
-
-```bash
-nemo-flow hook-forward
-```
-
-Persistent installer output includes `--gateway-url` and any selected export or
-session options in the generated command.
-
-`hook-forward` reads the canonical hook JSON from standard input, forwards it to
-the matching gateway endpoint, and prints the vendor-specific hook response.
+`hook-forward` prints the vendor-specific response and fails open by default
+(observability outages do not block the coding agent). Add `--fail-closed` to
+generated hook commands when policy requires hook delivery to block the agent.
 
-Useful wrapper and install options:
+Useful wrapper options:
 
 - `--atif-dir <dir>` writes ATIF trajectories on session end.
 - `--openinference-endpoint <url>` exports OpenInference traces.
@@ -131,8 +100,6 @@
 - `--profile <name>` records a configuration profile in session metadata.
 - `--gateway-mode hook-only|passthrough|required` records the expected gateway
   behavior in session metadata.
-- `--fail-closed` can be added to generated hook commands when the agent should - block on hook delivery failures. The default is fail-open. ## LLM Gateway @@ -150,7 +117,7 @@ The gateway exposes these passthrough routes: - `GET /v1/models` Transparent runs configure provider routing automatically where the launched -agent supports local routing. Persistent installs require you to point the +agent supports local routing. Standalone gateway mode requires you to point the agent's provider base URL at the gateway manually. ## Verify Export diff --git a/integrations/coding-agents/claude-code/README.md b/integrations/coding-agents/claude-code/README.md index 19fa7d52..9b7f4e07 100644 --- a/integrations/coding-agents/claude-code/README.md +++ b/integrations/coding-agents/claude-code/README.md @@ -72,20 +72,10 @@ Then run: nemo-flow run --agent claude ``` -## Persistent Setup +## Standalone Gateway -Use persistent hooks only when you do not want to launch Claude Code through the -wrapper: - -```bash -nemo-flow install claude-code \ - --scope user \ - --target cli \ - --gateway-url http://127.0.0.1:4040 \ - --atif-dir .nemo-flow/atif -``` - -Start the gateway in one terminal: +Use the long-running gateway only when you do not want to launch Claude Code +through the wrapper. Start the gateway in one terminal: ```bash NEMO_FLOW_ATIF_DIR=.nemo-flow/atif nemo-flow --bind 127.0.0.1:4040 @@ -98,6 +88,9 @@ export ANTHROPIC_BASE_URL=http://127.0.0.1:4040 claude ``` +Hook events (tool calls, session markers) are only captured when running +through the wrapper, which injects ephemeral hooks per-run. + ## Verify Run a Claude Code session that starts, uses one simple tool, and ends. 
Confirm diff --git a/integrations/coding-agents/codex/README.md b/integrations/coding-agents/codex/README.md index a8f89364..6c76c01e 100644 --- a/integrations/coding-agents/codex/README.md +++ b/integrations/coding-agents/codex/README.md @@ -81,21 +81,17 @@ Then run: nemo-flow run --agent codex ``` -## Persistent Setup +## Standalone Gateway -Use persistent hooks only when you do not want to launch Codex through the -wrapper: +Use the long-running gateway only when you do not want to launch Codex through +the wrapper. Start the gateway manually: ```bash -nemo-flow install codex \ - --scope user \ - --target both \ - --gateway-url http://127.0.0.1:4040 \ - --atif-dir .nemo-flow/atif +NEMO_FLOW_ATIF_DIR=.nemo-flow/atif nemo-flow --bind 127.0.0.1:4040 ``` -Then start the gateway manually and configure local Codex to use a gateway -provider alias instead of overriding the reserved built-in `openai` provider: +Then configure local Codex to use a gateway provider alias instead of +overriding the reserved built-in `openai` provider: ```toml model_provider = "nemo-flow-openai" diff --git a/integrations/coding-agents/cursor/README.md b/integrations/coding-agents/cursor/README.md index ddcd7c94..967aa9d2 100644 --- a/integrations/coding-agents/cursor/README.md +++ b/integrations/coding-agents/cursor/README.md @@ -77,21 +77,18 @@ Then run: nemo-flow run --agent cursor ``` -## Persistent Setup +## Standalone Gateway -Use persistent hooks only when you do not want to launch Cursor through the -wrapper: +Use the long-running gateway only when you do not want to launch Cursor +through the wrapper (e.g., the Cursor GUI). 
Start the gateway manually: ```bash -nemo-flow install cursor \ - --scope project \ - --target gui \ - --gateway-url http://127.0.0.1:4040 \ - --atif-dir .nemo-flow/atif +NEMO_FLOW_ATIF_DIR=.nemo-flow/atif nemo-flow --bind 127.0.0.1:4040 ``` -Then start the gateway manually and point Cursor provider traffic at -`http://127.0.0.1:4040` where Cursor exposes provider base URL configuration. +Then point Cursor provider traffic at `http://127.0.0.1:4040` where Cursor +exposes provider base URL configuration. Hook events are only captured when +running through the wrapper. ## Verify From 01c49bdbf76dcac44932dba2dece7425fe2e871c Mon Sep 17 00:00:00 2001 From: Ajay Thorve Date: Tue, 12 May 2026 02:56:04 -0700 Subject: [PATCH 04/15] fix(cli): address CodeRabbit + windows CI feedback - gate stub-binary detect test on cfg(unix); use pure detect fn so tests don't mutate PATH and race in parallel runs. - gateway: only strip ChatGPT-Plus JWT when OPENAI_API_KEY is set; preserves auth for users pointing at non-openai upstreams. - doctor: PID-suffixed write probe with create_new so we can't ever overwrite a real user file; footer says 'No failing checks' so warns aren't masked by 'All checks passed'. - server: OS-neutral remediation text in AddrInUse error (Unix + Windows examples). - setup: trim and reject blank URL input before persisting config. 
Signed-off-by: Ajay Thorve --- crates/cli/src/doctor.rs | 16 +++++++++++--- crates/cli/src/gateway.rs | 25 ++++++++++++++++++---- crates/cli/src/server.rs | 3 ++- crates/cli/src/setup.rs | 9 +++++--- crates/cli/tests/coverage/doctor_tests.rs | 4 ++-- crates/cli/tests/coverage/gateway_tests.rs | 23 +++++++++++++++++--- crates/cli/tests/coverage/setup_tests.rs | 21 ++++-------------- 7 files changed, 68 insertions(+), 33 deletions(-) diff --git a/crates/cli/src/doctor.rs b/crates/cli/src/doctor.rs index 8e0bbaa2..cf1a4794 100644 --- a/crates/cli/src/doctor.rs +++ b/crates/cli/src/doctor.rs @@ -273,9 +273,16 @@ async fn collect_observability(gateway: &GatewayConfig) -> Vec { } fn check_dir_writable(dir: &std::path::Path) -> Result<(), std::io::Error> { + use std::fs::OpenOptions; std::fs::create_dir_all(dir)?; - let probe = dir.join(".nemo-flow-write-probe"); - std::fs::write(&probe, b"")?; + // PID-suffixed name + create_new=true so we can never overwrite a real user file even if + // they happen to have a `.nemo-flow-write-probe` of their own. The probe is removed + // immediately; the file just witnesses that we have write access here. + let probe = dir.join(format!(".nemo-flow-write-probe-{}", std::process::id())); + OpenOptions::new() + .write(true) + .create_new(true) + .open(&probe)?; std::fs::remove_file(&probe).ok(); Ok(()) } @@ -451,7 +458,10 @@ pub(crate) fn format_human(report: &DoctorReport) -> String { out.push('\n'); if exit_code(report) == 0 { - out.push_str(" All checks passed.\n"); + // Don't say "All checks passed" — `Warn` results still map to exit code 0, so a clean + // exit just means nothing is failing, not that everything is green. This wording keeps + // the footer accurate when the report carries warnings. 
+ out.push_str(" No failing checks.\n"); } else { out.push_str(" Some checks FAILED; see details above.\n"); } diff --git a/crates/cli/src/gateway.rs b/crates/cli/src/gateway.rs index f371f99e..ee0fd58a 100644 --- a/crates/cli/src/gateway.rs +++ b/crates/cli/src/gateway.rs @@ -549,7 +549,15 @@ async fn forward_upstream_request( headers: &HeaderMap, route: ProviderRoute, ) -> Result { - let sanitized = strip_chatgpt_oauth_for_openai_route(headers, route); + // Only strip the inbound JWT when we actually have a replacement key to inject. Without one + // the upstream just receives no auth and 401s, which is no better than letting it reject the + // JWT itself — and stripping silently can break setups that point the gateway at an upstream + // that happens to accept the ChatGPT-Plus token. + let has_openai_env = std::env::var("OPENAI_API_KEY") + .ok() + .filter(|v| !v.is_empty()) + .is_some(); + let sanitized = strip_chatgpt_oauth_for_openai_route(headers, route, has_openai_env); let mut upstream = http.request(method.clone(), url).body(body_bytes.clone()); for (name, value) in &sanitized { if should_forward_request_header(name) { @@ -568,13 +576,18 @@ async fn forward_upstream_request( // sees a valid bearer token. Hermes-style clients that send a real `sk-...` API key are not // affected — the JWT detector only triggers on `Bearer eyJ...` (base64 JSON header). Tracks // NMF-86. 
-fn strip_chatgpt_oauth_for_openai_route(headers: &HeaderMap, route: ProviderRoute) -> HeaderMap { +fn strip_chatgpt_oauth_for_openai_route( + headers: &HeaderMap, + route: ProviderRoute, + has_replacement_key: bool, +) -> HeaderMap { if !matches!( route, ProviderRoute::OpenAiResponses | ProviderRoute::OpenAiChatCompletions | ProviderRoute::OpenAiModels - ) { + ) || !has_replacement_key + { return headers.clone(); } let mut out = headers.clone(); @@ -714,7 +727,11 @@ pub(crate) async fn models( .map(|p| p.as_str()) .unwrap_or(parts.uri.path()), ); - let sanitized = strip_chatgpt_oauth_for_openai_route(&parts.headers, provider); + let has_openai_env = std::env::var("OPENAI_API_KEY") + .ok() + .filter(|v| !v.is_empty()) + .is_some(); + let sanitized = strip_chatgpt_oauth_for_openai_route(&parts.headers, provider, has_openai_env); let mut upstream = state.http.get(upstream_url); for (name, value) in &sanitized { if should_forward_request_header(name) { diff --git a/crates/cli/src/server.rs b/crates/cli/src/server.rs index 1c62a1a4..b6de3c54 100644 --- a/crates/cli/src/server.rs +++ b/crates/cli/src/server.rs @@ -42,7 +42,8 @@ pub(crate) async fn serve(config: GatewayConfig) -> Result<(), CliError> { CliError::Launch(format!( "cannot bind {} — port is already in use. Most likely cause: another \ `nemo-flow` daemon is already running. 
Fix one of:\n \
-                 • kill the running daemon: `pkill -f nemo-flow`\n \
+                 • stop the running daemon (Unix: `pkill -f nemo-flow`, Windows: \
+                   `taskkill /F /IM nemo-flow.exe`)\n \
                  • use an ephemeral port: `nemo-flow --bind 127.0.0.1:0`\n \
                  • pick a free port: `nemo-flow --bind 127.0.0.1:4041`",
            config.bind
diff --git a/crates/cli/src/setup.rs b/crates/cli/src/setup.rs
index c8f2700e..7136e758 100644
--- a/crates/cli/src/setup.rs
+++ b/crates/cli/src/setup.rs
@@ -87,7 +87,7 @@ pub(crate) fn detect_installed_agents() -> Vec<CodingAgent> {
     detect_installed_agents_in(std::env::var_os("PATH").as_deref())
 }
 
-fn detect_installed_agents_in(path_var: Option<&std::ffi::OsStr>) -> Vec<CodingAgent> {
+pub(crate) fn detect_installed_agents_in(path_var: Option<&std::ffi::OsStr>) -> Vec<CodingAgent> {
     let Some(path_var) = path_var else {
         return Vec::new();
     };
@@ -557,10 +557,13 @@ fn ask_openai_base_url(
         .with_initial_text(initial)
         .interact_text()
         .map_err(setup_error)?;
-    if url == "https://api.openai.com" {
+    // Treat blank input the same as accepting the default — otherwise `openai_base_url = ""`
+    // lands in config.toml and the launcher tries to use an empty URL on the next run.
+ let url = url.trim(); + if url.is_empty() || url == "https://api.openai.com" { Ok(None) } else { - Ok(Some(url)) + Ok(Some(url.to_string())) } } diff --git a/crates/cli/tests/coverage/doctor_tests.rs b/crates/cli/tests/coverage/doctor_tests.rs index 231d53b5..e32fba2f 100644 --- a/crates/cli/tests/coverage/doctor_tests.rs +++ b/crates/cli/tests/coverage/doctor_tests.rs @@ -96,10 +96,10 @@ fn format_human_emits_fixed_section_order() { } #[test] -fn format_human_reports_all_checks_passed_on_clean_report() { +fn format_human_reports_no_failing_checks_on_clean_report() { let report = empty_report(); let rendered = format_human(&report); - assert!(rendered.contains("All checks passed.")); + assert!(rendered.contains("No failing checks.")); } #[test] diff --git a/crates/cli/tests/coverage/gateway_tests.rs b/crates/cli/tests/coverage/gateway_tests.rs index d239508c..3b15abe9 100644 --- a/crates/cli/tests/coverage/gateway_tests.rs +++ b/crates/cli/tests/coverage/gateway_tests.rs @@ -214,7 +214,8 @@ fn strips_chatgpt_plus_jwt_from_openai_route_inbound() { "authorization", HeaderValue::from_static("Bearer eyJhbGciOiJIUzI1NiJ9.deadbeef.signature"), ); - let sanitized = strip_chatgpt_oauth_for_openai_route(&inbound, ProviderRoute::OpenAiResponses); + let sanitized = + strip_chatgpt_oauth_for_openai_route(&inbound, ProviderRoute::OpenAiResponses, true); assert!(sanitized.get("authorization").is_none()); } @@ -227,7 +228,8 @@ fn preserves_real_bearer_keys_on_openai_route() { "authorization", HeaderValue::from_static("Bearer sk-real-provider-key"), ); - let sanitized = strip_chatgpt_oauth_for_openai_route(&inbound, ProviderRoute::OpenAiResponses); + let sanitized = + strip_chatgpt_oauth_for_openai_route(&inbound, ProviderRoute::OpenAiResponses, true); assert_eq!( sanitized.get("authorization").unwrap(), "Bearer sk-real-provider-key" @@ -245,7 +247,22 @@ fn does_not_touch_anthropic_route_authorization() { HeaderValue::from_static("Bearer eyJ.anthropic.case"), ); let sanitized = 
- strip_chatgpt_oauth_for_openai_route(&inbound, ProviderRoute::AnthropicMessages); + strip_chatgpt_oauth_for_openai_route(&inbound, ProviderRoute::AnthropicMessages, true); + assert!(sanitized.get("authorization").is_some()); +} + +#[test] +fn preserves_jwt_when_no_replacement_key_available() { + // If OPENAI_API_KEY isn't set the gateway has nothing to inject after stripping, so leave + // the inbound bearer in place. Stripping would silently de-auth setups that point at an + // upstream which happens to accept the ChatGPT-Plus token. + let mut inbound = HeaderMap::new(); + inbound.insert( + "authorization", + HeaderValue::from_static("Bearer eyJhbGciOiJIUzI1NiJ9.deadbeef.signature"), + ); + let sanitized = + strip_chatgpt_oauth_for_openai_route(&inbound, ProviderRoute::OpenAiResponses, false); assert!(sanitized.get("authorization").is_some()); } diff --git a/crates/cli/tests/coverage/setup_tests.rs b/crates/cli/tests/coverage/setup_tests.rs index 3d55940a..53fe0964 100644 --- a/crates/cli/tests/coverage/setup_tests.rs +++ b/crates/cli/tests/coverage/setup_tests.rs @@ -20,27 +20,14 @@ fn detect_installed_agents_finds_binaries_on_path() { std::fs::set_permissions(&path, std::fs::Permissions::from_mode(0o755)).unwrap(); } - // SAFETY: we restore PATH on drop via the guard below. Tests are not run concurrently - // within the same binary by default (cargo test --jobs 1 for parallel safety), and we - // do not assert on agent ordering or unrelated PATH entries. - let original_path = std::env::var_os("PATH"); - unsafe { - std::env::set_var("PATH", temp.path()); - } - - let detected = detect_installed_agents(); + // Use the pure-function variant that takes PATH as an arg instead of mutating the global + // env var. Tests run in parallel by default; touching `std::env::set_var("PATH", ...)` would + // race with every other test that reads the environment. 
+ let detected = detect_installed_agents_in(Some(temp.path().as_os_str())); assert!(detected.contains(&CodingAgent::ClaudeCode)); assert!(detected.contains(&CodingAgent::Cursor)); assert!(!detected.contains(&CodingAgent::Codex)); assert!(!detected.contains(&CodingAgent::Hermes)); - - unsafe { - if let Some(value) = original_path { - std::env::set_var("PATH", value); - } else { - std::env::remove_var("PATH"); - } - } } #[test] From a55c0d465e4a821d0698dad2f2e3f27e163cd883 Mon Sep 17 00:00:00 2001 From: Ajay Thorve Date: Tue, 12 May 2026 09:12:13 -0700 Subject: [PATCH 05/15] test(cli): gate windows-incompatible launcher e2e on cfg(unix) The fake-agent flow needs argv[0] to be a script literally named after a known agent. On Windows that requires cmd.exe wrapping, which loses the inference. Skip on Windows until the launcher grows real Windows support. Signed-off-by: Ajay Thorve --- crates/cli/tests/coverage/launcher_tests.rs | 21 +++++++-------------- 1 file changed, 7 insertions(+), 14 deletions(-) diff --git a/crates/cli/tests/coverage/launcher_tests.rs b/crates/cli/tests/coverage/launcher_tests.rs index fb1b6627..861b96bb 100644 --- a/crates/cli/tests/coverage/launcher_tests.rs +++ b/crates/cli/tests/coverage/launcher_tests.rs @@ -543,6 +543,13 @@ fn cursor_dry_run_does_not_write_hooks() { std::env::set_current_dir(previous).unwrap(); } +// This e2e test relies on argv[0] being a script literally named after a known agent (so +// `CodingAgent::infer` recognises the basename without an explicit `--agent`). On Windows the +// only practical way to invoke a `.cmd` / `.bat` shim is via `cmd.exe /C script.cmd`, which +// makes argv[0] = `cmd.exe` and breaks inference. Gating Unix-only keeps cross-platform CI +// green; real Windows agent-spawn coverage can come back with a `.exe` fake binary once the +// launcher grows Windows support. 
+#[cfg(unix)]
 #[tokio::test]
 async fn run_starts_gateway_injects_env_and_returns_agent_exit_code() {
     let temp = tempfile::tempdir().unwrap();
@@ -593,20 +600,6 @@ fn fake_agent_command(temp: &Path, output: &Path) -> Vec<String> {
     vec![script.display().to_string()]
 }
 
-#[cfg(windows)]
-fn fake_agent_command(temp: &Path, output: &Path) -> Vec<String> {
-    let script = temp.join("fake-agent.cmd");
-    std::fs::write(
-        &script,
-        format!(
-            "@echo off\r\n \"{}\"\r\nexit /b 7\r\n",
-            output.display()
-        ),
-    )
-    .unwrap();
-    vec!["cmd.exe".into(), "/C".into(), script.display().to_string()]
-}
-
 #[tokio::test]
 async fn dry_run_does_not_spawn_agent() {
     let command = RunCommand {

From 465928f0736906b4570983a45562719594c5e5c3 Mon Sep 17 00:00:00 2001
From: Ajay Thorve
Date: Tue, 12 May 2026 09:26:07 -0700
Subject: [PATCH 06/15] fix(cli): more CodeRabbit follow-ups (XDG, replace
 semantics, stderr)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

- setup: respect XDG_CONFIG_HOME for global config writes, preview, and
  re-load (shared helper + config::user_config_dir).
- setup: wizard-owned [upstream]/[observability]/[export] sections now use
  replace semantics — re-running with the default actually clears a prior
  override.
- doctor: footer distinguishes pass vs pass-with-warns; kill_on_drop on the
  agent version probe so a hung child can't leak past the timeout; global
  config path now uses the same XDG-aware resolver.
- gateway: clarify the JWT-detection comment (Bearer eyJ prefix only; opaque
  OAuth tokens unaffected).
- launcher: live-status frame goes to stderr and is suppressed entirely when
  stdout is non-TTY, so piped agent output stays clean.
- config: rename AgentConfigs::claude_code -> AgentConfigs::claude to match
  the CLI rename.
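The replace-vs-merge distinction this commit introduces can be modelled with plain maps.
This is a deliberately simplified sketch; the real `merge_section`/`replace_section` in
setup.rs operate on `toml_edit::DocumentMut` tables, not string maps:

```rust
use std::collections::BTreeMap;

type Doc = BTreeMap<String, String>;

// Merge semantics: src wins when it has the key; dst survives when src omits it.
// Appropriate for sections users may hand-edit (e.g. `plugins`).
fn merge_section(dst: &mut Doc, src: &Doc, key: &str) {
    if let Some(v) = src.get(key) {
        dst.insert(key.to_owned(), v.clone());
    }
}

// Replace semantics: omission in src also deletes the key from dst, so re-running
// the wizard and accepting the default genuinely clears a prior override.
fn replace_section(dst: &mut Doc, src: &Doc, key: &str) {
    match src.get(key) {
        Some(v) => {
            dst.insert(key.to_owned(), v.clone());
        }
        None => {
            dst.remove(key);
        }
    }
}
```

With merge semantics a stale `[upstream]` override would silently survive a wizard re-run;
with replace semantics the wizard's output is authoritative for the sections it owns.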
Signed-off-by: Ajay Thorve
---
 crates/cli/src/config.rs                  | 16 ++++++---
 crates/cli/src/doctor.rs                  | 36 +++++++++++++++----
 crates/cli/src/gateway.rs                 | 16 ++++-----
 crates/cli/src/launcher.rs                | 39 +++++++++++++--------
 crates/cli/src/setup.rs                   | 42 ++++++++++++++++++-----
 crates/cli/tests/coverage/doctor_tests.rs |  5 +--
 crates/cli/tests/coverage/setup_tests.rs  |  9 +++--
 7 files changed, 116 insertions(+), 47 deletions(-)

diff --git a/crates/cli/src/config.rs b/crates/cli/src/config.rs
index 082fdfd7..6d105586 100644
--- a/crates/cli/src/config.rs
+++ b/crates/cli/src/config.rs
@@ -315,7 +315,7 @@ pub(crate) struct ResolvedConfig {
 
 #[derive(Debug, Clone, Default)]
 pub(crate) struct AgentConfigs {
-    pub(crate) claude_code: AgentCommandConfig,
+    pub(crate) claude: AgentCommandConfig,
     pub(crate) codex: AgentCommandConfig,
     pub(crate) cursor: CursorAgentConfig,
     pub(crate) hermes: AgentCommandConfig,
@@ -594,10 +594,18 @@ fn find_project_config(start: &std::path::Path) -> Option<PathBuf> {
 // Resolves the user config using XDG first and HOME/USERPROFILE second. Returning `None` keeps
 // config loading portable in minimal environments where no home directory is visible.
 fn user_config_path() -> Option<PathBuf> {
+    user_config_dir().map(|dir| dir.join("config.toml"))
+}
+
+/// Resolves the nemo-flow user config DIRECTORY (without trailing filename) using the same XDG
+/// rules as `user_config_path`. Exposed so wizard/doctor code paths that write to or display
+/// the global location stay in sync with the loader — without this, hard-coded
+/// `$HOME/.config/nemo-flow` references silently ignore `$XDG_CONFIG_HOME`.
+pub(crate) fn user_config_dir() -> Option<PathBuf> {
     if let Some(base) = std::env::var_os("XDG_CONFIG_HOME") {
-        return Some(PathBuf::from(base).join("nemo-flow/config.toml"));
+        return Some(PathBuf::from(base).join("nemo-flow"));
     }
-    home_dir().map(|home| home.join(".config/nemo-flow/config.toml"))
+    home_dir().map(|home| home.join(".config/nemo-flow"))
 }
 
 // Applies the typed TOML config model to the resolved runtime config. Missing sections and fields
@@ -681,7 +689,7 @@ fn apply_file_agents_config(agents: &mut AgentConfigs, file_agents: Option Option {
     cmd.arg("--version")
         .stdout(Stdio::piped())
         .stderr(Stdio::null())
-        .stdin(Stdio::null());
+        .stdin(Stdio::null())
+        // Ensure the child gets killed if our future is dropped on timeout. Without this a
+        // misbehaving agent binary that exceeds NETWORK_TIMEOUT would leak as an orphan
+        // process for the lifetime of the doctor invocation (and beyond).
+        .kill_on_drop(true);
     let child = cmd.spawn().ok()?;
     let output = timeout(NETWORK_TIMEOUT, child.wait_with_output())
         .await
@@ -392,6 +399,20 @@ pub(crate) fn exit_code(report: &DoctorReport) -> u8 {
     u8::from(any_fail)
 }
 
+// Returns true if any check in the report carries a `Warn` status. Used by the human footer to
+// distinguish a fully-green report from one where everything passed but some checks issued
+// warnings — both exit 0, but the wording shouldn't.
+fn report_has_warn(report: &DoctorReport) -> bool {
+    report
+        .observability
+        .iter()
+        .chain(report.completions.iter())
+        .any(|c| matches!(c.status, Status::Warn))
+        || matches!(report.configuration.workspace.status, Status::Warn)
+        || matches!(report.configuration.global.status, Status::Warn)
+        || matches!(report.configuration.system.status, Status::Warn)
+}
+
 /// Renders the doctor report in the fixed human-readable layout the design doc shows. Sections
The banner header lives /// in `crate::banner::print_doctor_header` (called from `run_doctor` before this renders) so the @@ -458,10 +479,11 @@ pub(crate) fn format_human(report: &DoctorReport) -> String { out.push('\n'); if exit_code(report) == 0 { - // Don't say "All checks passed" — `Warn` results still map to exit code 0, so a clean - // exit just means nothing is failing, not that everything is green. This wording keeps - // the footer accurate when the report carries warnings. - out.push_str(" No failing checks.\n"); + if report_has_warn(report) { + out.push_str(" All checks passed, but some issued warnings; see details above.\n"); + } else { + out.push_str(" All checks passed.\n"); + } } else { out.push_str(" Some checks FAILED; see details above.\n"); } diff --git a/crates/cli/src/gateway.rs b/crates/cli/src/gateway.rs index ee0fd58a..88444878 100644 --- a/crates/cli/src/gateway.rs +++ b/crates/cli/src/gateway.rs @@ -568,14 +568,14 @@ async fn forward_upstream_request( upstream.send().await } -// Removes ChatGPT-Plus OAuth JWTs from inbound `Authorization` on OpenAI routes. Codex 0.130 -// keeps sending the JWT from `~/.codex/auth.json` even when its provider override declares -// `requires_openai_auth=false`, and the JWT is a consumer token rejected by `api.openai.com` / -// LiteLLM-fronted endpoints (NVIDIA's `inference-api.nvidia.com`) with 401. By dropping the JWT -// here, `inject_provider_auth` then injects `OPENAI_API_KEY` from environment and the upstream -// sees a valid bearer token. Hermes-style clients that send a real `sk-...` API key are not -// affected — the JWT detector only triggers on `Bearer eyJ...` (base64 JSON header). Tracks -// NMF-86. +// Removes JWT-shaped bearer tokens from inbound `Authorization` on OpenAI routes when we have +// a replacement `OPENAI_API_KEY` to inject. 
The detector triggers strictly on the `Bearer eyJ` +// prefix (base64-encoded JSON header), which is what Codex 0.130 sends from `~/.codex/auth.json` +// — that JWT is a consumer ChatGPT-Plus token rejected by `api.openai.com` / LiteLLM-fronted +// endpoints (NVIDIA's `inference-api.nvidia.com`) with 401. After stripping, `inject_provider_auth` +// substitutes the env-provided key and the upstream sees valid auth. ChatGPT OAuth flows in +// general may use opaque tokens too, but those don't match the prefix and are forwarded as-is. +// Real `sk-...` API keys are likewise unaffected. Tracks NMF-86. fn strip_chatgpt_oauth_for_openai_route( headers: &HeaderMap, route: ProviderRoute, diff --git a/crates/cli/src/launcher.rs b/crates/cli/src/launcher.rs index ff8efc71..cb88ee99 100644 --- a/crates/cli/src/launcher.rs +++ b/crates/cli/src/launcher.rs @@ -215,7 +215,7 @@ fn resolved_agent(command: &RunCommand, argv: &[String]) -> Result Option> { let command = match agent { - CodingAgent::ClaudeCode => agents.claude_code.command.as_ref(), + CodingAgent::ClaudeCode => agents.claude.command.as_ref(), CodingAgent::Codex => agents.codex.command.as_ref(), CodingAgent::Cursor => agents.cursor.command.as_ref(), CodingAgent::Hermes => agents.hermes.command.as_ref(), @@ -485,10 +485,17 @@ impl PreparedRun { // Prints a compact pre-launch status banner so users see at a glance where their observability // data is going (gateway URL, ATIF dir, OpenInference endpoint) before the agent's own UI takes - // over the terminal. Distinct from `print()` which is the verbose `--print` / `--dry-run` dump - // intended for inspection — this banner is always-on for live runs and wears the same - // NVIDIA-green rounded border as the intro banner so the brand frame stays consistent. + // over the terminal. 
Always emitted on stderr so it never contaminates piped/redirected agent
+    // output, and suppressed entirely when stdout is not a TTY — scripts capturing the agent stream
+    // get a clean pipe, interactive users still get the bordered frame. Distinct from `print()`,
+    // which is the verbose `--print` / `--dry-run` dump intended for inspection.
     fn print_live_status(&self, agent: CodingAgent, gateway_url: &str, resolved: &ResolvedConfig) {
+        // Suppress entirely on non-TTY stdout: when the user redirects the agent's stream to a
+        // file or pipes it into another tool, no banner should appear ahead of that output.
+        if !std::io::IsTerminal::is_terminal(&std::io::stdout()) {
+            return;
+        }
+
         let mut lines: Vec<String> = Vec::new();
         lines.push(format!("NeMo Flow → {}", agent.as_arg()));
         lines.push(format!(" Gateway {gateway_url}"));
@@ -507,25 +514,26 @@ impl PreparedRun {
             }
         }
 
-        let use_color = std::io::IsTerminal::is_terminal(&std::io::stdout())
+        // Color decisions key off stderr (where we actually emit), not stdout.
+        let use_color = std::io::IsTerminal::is_terminal(&std::io::stderr())
             && std::env::var_os("NO_COLOR").is_none();
         let max_w = lines.iter().map(|l| l.chars().count()).max().unwrap_or(0);
         // 1-char padding on each side of the longest line.
let inner = max_w + 2; - println!(); - print_border_line('╭', '╮', inner, use_color); + eprintln!(); + eprint_border_line('╭', '╮', inner, use_color); for line in &lines { let pad = max_w - line.chars().count(); let body = format!(" {line}{spaces} ", spaces = " ".repeat(pad)); if use_color { - println!("\x1b[38;5;112m│\x1b[0m{body}\x1b[38;5;112m│\x1b[0m"); + eprintln!("\x1b[38;5;112m│\x1b[0m{body}\x1b[38;5;112m│\x1b[0m"); } else { - println!("│{body}│"); + eprintln!("│{body}│"); } } - print_border_line('╰', '╯', inner, use_color); - println!(); + eprint_border_line('╰', '╯', inner, use_color); + eprintln!(); } // Prints the resolved transparent-run plan, including dynamic gateway URL, upstream base URLs, @@ -602,13 +610,14 @@ fn codex_gateway_provider_config(gateway_url: &str) -> String { } // Prints one horizontal border line for the live-status frame in NVIDIA green when color is -// enabled, otherwise plain ASCII-compatible box-drawing. -fn print_border_line(left: char, right: char, inner_width: usize, color: bool) { +// enabled, otherwise plain ASCII-compatible box-drawing. Writes to stderr so the banner doesn't +// contaminate piped/redirected agent stdout. 
+fn eprint_border_line(left: char, right: char, inner_width: usize, color: bool) { let dashes = "─".repeat(inner_width); if color { - println!("\x1b[38;5;112m{left}{dashes}{right}\x1b[0m"); + eprintln!("\x1b[38;5;112m{left}{dashes}{right}\x1b[0m"); } else { - println!("{left}{dashes}{right}"); + eprintln!("{left}{dashes}{right}"); } } diff --git a/crates/cli/src/setup.rs b/crates/cli/src/setup.rs index 7136e758..2e2c02bd 100644 --- a/crates/cli/src/setup.rs +++ b/crates/cli/src/setup.rs @@ -192,7 +192,7 @@ pub(crate) fn save_config( written.push(path); } if matches!(scope, ConfigScope::Global | ConfigScope::Both) { - let global_dir = home.join(".config").join("nemo-flow"); + let global_dir = global_config_dir(home); std::fs::create_dir_all(&global_dir)?; let path = global_dir.join("config.toml"); write_or_merge(&path, doc, merge_scope)?; @@ -201,6 +201,16 @@ pub(crate) fn save_config( Ok(written) } +// Resolves the global nemo-flow config directory. Prefers `$XDG_CONFIG_HOME/nemo-flow` (matches +// `config::user_config_dir`), falling back to `<home>/.config/nemo-flow`. Tests that pass a +// tempdir for `home` get hermetic paths unless they set XDG_CONFIG_HOME explicitly. +fn global_config_dir(home: &Path) -> PathBuf { + if let Some(base) = std::env::var_os("XDG_CONFIG_HOME") { + return PathBuf::from(base).join("nemo-flow"); + } + home.join(".config").join("nemo-flow") +} + // Writes the wizard-built `doc` to `path`. When `merge_scope` is `Some(agent)` and the file // already exists, preserves any `[agents.<agent>]` blocks while replacing the shared sections // and the target agent's block. When `merge_scope` is `None`, just overwrites the file.
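The XDG precedence can be checked with a pure variant of the resolver, where the env value is a parameter instead of a `std::env` read. An illustrative sketch (the name `resolve_global_config_dir` and the parameterization are assumptions, not the crate's exact function):

```rust
use std::path::{Path, PathBuf};

// Pure variant of the XDG-aware resolver: `xdg` stands in for the value of
// $XDG_CONFIG_HOME so the precedence is trivially testable. Illustrative only.
fn resolve_global_config_dir(home: &Path, xdg: Option<&str>) -> PathBuf {
    match xdg {
        // $XDG_CONFIG_HOME wins when set, matching `config::user_config_dir`.
        Some(base) => PathBuf::from(base).join("nemo-flow"),
        // Otherwise fall back to <home>/.config/nemo-flow.
        None => home.join(".config").join("nemo-flow"),
    }
}

fn main() {
    let home = Path::new("/home/user");
    assert_eq!(
        resolve_global_config_dir(home, Some("/custom/xdg")),
        PathBuf::from("/custom/xdg/nemo-flow")
    );
    assert_eq!(
        resolve_global_config_dir(home, None),
        PathBuf::from("/home/user/.config/nemo-flow")
    );
    println!("ok");
}
```

Making the env read a parameter is also what sidesteps the CI flakiness fixed later in this series: the real `global_config_dir` reads the process environment, which is why the tests need a serializing guard.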
@@ -222,24 +232,40 @@ fn write_or_merge( .parse() .map_err(|err| CliError::Config(format!("could not parse existing config: {err}")))?; let agent_key = agent_key_and_command(agent).0; - merge_section(&mut existing, doc, "observability"); - merge_section(&mut existing, doc, "export"); + // Wizard-owned sections use REPLACE semantics: if the user re-runs setup and the new doc + // omits a section, the previous override is removed too. Otherwise accepting the default + // (e.g. dropping a custom `openai_base_url`) could not actually revert the override — + // the old value would silently survive. + replace_section(&mut existing, doc, "observability"); + replace_section(&mut existing, doc, "export"); + replace_section(&mut existing, doc, "upstream"); + // `plugins` is not wizard-owned (users may hand-edit it). Preserve on omission. merge_section(&mut existing, doc, "plugins"); - merge_section(&mut existing, doc, "upstream"); merge_agents_entry(&mut existing, doc, agent_key); std::fs::write(path, existing.to_string())?; Ok(()) } // Copies a top-level section from `src` into `dst`, replacing any existing entry under the same -// key. If `src` does not contain the section, the existing entry in `dst` is left as-is — that -// preserves shared settings like `[upstream]` that the wizard does not touch. +// key. If `src` does not contain the section, the existing entry in `dst` is left as-is. +// Use for shared/hand-edited sections the wizard does not own. fn merge_section(dst: &mut DocumentMut, src: &DocumentMut, key: &str) { if let Some(item) = src.get(key) { dst[key] = item.clone(); } } +// Like `merge_section`, but when `src` omits the key the existing entry in `dst` is removed. +// Use for wizard-owned sections (the wizard's output is authoritative for these keys). 
+fn replace_section(dst: &mut DocumentMut, src: &DocumentMut, key: &str) { + match src.get(key) { + Some(item) => dst[key] = item.clone(), + None => { + dst.remove(key); + } + } +} + // Replaces the single `[agents.<agent>]` block in `dst` with the one from `src`. If `src` does // not contain that block, the existing entry in `dst` is left as-is. fn merge_agents_entry(dst: &mut DocumentMut, src: &DocumentMut, agent_key: &str) { @@ -463,7 +489,7 @@ fn read_existing_defaults() -> Option { let workspace_path = cwd.join(".nemo-flow").join("config.toml"); let global_path = home .as_ref() - .map(|h| h.join(".config").join("nemo-flow").join("config.toml")); + .map(|h| global_config_dir(h).join("config.toml")); let workspace_exists = workspace_path.exists(); let global_exists = global_path.as_ref().is_some_and(|p| p.exists()); @@ -793,7 +819,7 @@ fn preview_paths(scope: ConfigScope, cwd: &Path, home: &Path) -> Vec<PathBuf> { paths.push(cwd.join(".nemo-flow").join("config.toml")); } if matches!(scope, ConfigScope::Global | ConfigScope::Both) { - paths.push(home.join(".config").join("nemo-flow").join("config.toml")); + paths.push(global_config_dir(home).join("config.toml")); } paths } diff --git a/crates/cli/tests/coverage/doctor_tests.rs b/crates/cli/tests/coverage/doctor_tests.rs index e32fba2f..2f004efc 100644 --- a/crates/cli/tests/coverage/doctor_tests.rs +++ b/crates/cli/tests/coverage/doctor_tests.rs @@ -96,10 +96,11 @@ fn format_human_emits_fixed_section_order() { } #[test] -fn format_human_reports_no_failing_checks_on_clean_report() { +fn format_human_reports_all_checks_passed_on_clean_report() { let report = empty_report(); let rendered = format_human(&report); - assert!(rendered.contains("No failing checks.")); + assert!(rendered.contains("All checks passed.")); + assert!(!rendered.contains("warnings")); } #[test] diff --git a/crates/cli/tests/coverage/setup_tests.rs b/crates/cli/tests/coverage/setup_tests.rs index 53fe0964..530e3c41 ---
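The merge-vs-replace distinction can be modeled with plain maps. A toy sketch using `HashMap` in place of `toml_edit::DocumentMut`; the function names mirror the diff, but nothing here is the real implementation:

```rust
use std::collections::HashMap;

// Toy stand-in for a TOML document: section name -> section contents.
type Doc = HashMap<String, String>;

// Merge semantics: `src` wins when the key is present, `dst` survives when `src`
// omits it. Used for sections the wizard does not own (users may hand-edit them).
fn merge_section(dst: &mut Doc, src: &Doc, key: &str) {
    if let Some(v) = src.get(key) {
        dst.insert(key.to_string(), v.clone());
    }
}

// Replace semantics: `src` is authoritative. Omission in `src` deletes the key in
// `dst`, so re-running the wizard and accepting a default really reverts overrides.
fn replace_section(dst: &mut Doc, src: &Doc, key: &str) {
    match src.get(key) {
        Some(v) => {
            dst.insert(key.to_string(), v.clone());
        }
        None => {
            dst.remove(key);
        }
    }
}

fn main() {
    let mut existing: Doc = HashMap::from([
        ("upstream".into(), "http://old-openai".into()),
        ("plugins".into(), "hand-edited".into()),
    ]);
    let wizard_output: Doc = HashMap::new(); // wizard run accepted all defaults

    replace_section(&mut existing, &wizard_output, "upstream");
    merge_section(&mut existing, &wizard_output, "plugins");

    assert!(existing.get("upstream").is_none()); // stale override cleared
    assert_eq!(existing.get("plugins").map(String::as_str), Some("hand-edited"));
    println!("ok");
}
```

The test updates later in this series lock in exactly this behavior: a re-run that omits `[upstream]` must clear the old `http://old-openai` override rather than silently preserving it.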
a/crates/cli/tests/coverage/setup_tests.rs +++ b/crates/cli/tests/coverage/setup_tests.rs @@ -206,7 +206,7 @@ command = "codex --full-auto" assert!(merged.contains("[observability]")); assert!(merged.contains("[agents.claude]")); assert!(merged.contains(r#"command = "claude""#)); - // Other agents and untouched sections survive. + // Other agents (not touched by this scoped run) survive. assert!( merged.contains("[agents.codex]"), "expected scoped merge to preserve [agents.codex], got:\n{merged}" @@ -215,9 +215,12 @@ command = "codex --full-auto" merged.contains("codex --full-auto"), "expected scoped merge to preserve codex command, got:\n{merged}" ); + // `[upstream]` is wizard-owned: the new doc omits it (no custom openai_base_url), so the + // prior override must be cleared. If we preserved it, accepting the default in a re-run + // could not actually revert a custom upstream URL. assert!( - merged.contains("http://old-openai"), - "expected scoped merge to preserve untouched [upstream], got:\n{merged}" + !merged.contains("http://old-openai"), + "expected scoped merge to clear stale [upstream] override when new doc omits it, got:\n{merged}" ); // Old claude command should be gone. assert!( From 0a72df2f22788a94bbac0f390850d36c2dfeebec Mon Sep 17 00:00:00 2001 From: Ajay Thorve Date: Tue, 12 May 2026 10:11:36 -0700 Subject: [PATCH 07/15] test(cli): isolate XDG_CONFIG_HOME in setup save-config test CI runners set XDG_CONFIG_HOME (e.g. /home/runner/.config), which the new XDG-aware global_config_dir() picks up and uses in preference to the test's per-call home tempdir. Add an env-mutex guard that clears the var for the test's lifetime so the home-path-prefix assertion is meaningful. 
Signed-off-by: Ajay Thorve --- crates/cli/tests/coverage/setup_tests.rs | 45 ++++++++++++++++++++++++ 1 file changed, 45 insertions(+) diff --git a/crates/cli/tests/coverage/setup_tests.rs b/crates/cli/tests/coverage/setup_tests.rs index 530e3c41..c308a5d1 100644 --- a/crates/cli/tests/coverage/setup_tests.rs +++ b/crates/cli/tests/coverage/setup_tests.rs @@ -2,6 +2,50 @@ // SPDX-License-Identifier: Apache-2.0 use super::*; +use std::sync::{Mutex, OnceLock}; + +// Tests that exercise the global-config write path must run serially with respect to each other +// because `save_config` reads `$XDG_CONFIG_HOME`. CI runners commonly set this var to a real +// `/home/runner/.config` path, which would override the per-test `home` tempdir and make +// path-prefix assertions racy/false. This guard clears the var for the duration of the test +// and restores its prior value on drop. +fn xdg_env_lock() -> &'static Mutex<()> { + static LOCK: OnceLock<Mutex<()>> = OnceLock::new(); + LOCK.get_or_init(|| Mutex::new(())) +} + +struct XdgScope<'a> { + _guard: std::sync::MutexGuard<'a, ()>, + prev: Option<std::ffi::OsString>, +} + +impl<'a> XdgScope<'a> { + fn cleared() -> Self { + let guard = xdg_env_lock().lock().unwrap_or_else(|e| e.into_inner()); + let prev = std::env::var_os("XDG_CONFIG_HOME"); + // SAFETY: We hold the mutex for the lifetime of this scope, and the only other tests + // that touch XDG_CONFIG_HOME also go through this guard. Restored on drop. + unsafe { + std::env::remove_var("XDG_CONFIG_HOME"); + } + Self { + _guard: guard, + prev, + } + } +} + +impl<'a> Drop for XdgScope<'a> { + fn drop(&mut self) { + // SAFETY: see `cleared()` above — the env mutex is still held. + unsafe { + match self.prev.take() { + Some(value) => std::env::set_var("XDG_CONFIG_HOME", value), + None => std::env::remove_var("XDG_CONFIG_HOME"), + } + } + } +} // Stub-binary detection relies on the Unix executable bit. Windows-side agent presence checks // use a different mechanism (e.g.
`.exe` extension matching), so this lookup test is gated to @@ -231,6 +275,7 @@ command = "codex --full-auto" #[test] fn save_config_writes_both_scopes_when_both_selected() { + let _xdg = XdgScope::cleared(); let answers = SetupAnswers { scope: ConfigScope::Both, agents: vec![], From 6f4f1c35c3fed17657d331c85ede639bc6d41636 Mon Sep 17 00:00:00 2001 From: Ajay Thorve Date: Tue, 12 May 2026 10:57:04 -0700 Subject: [PATCH 08/15] fix(cli): remaining CodeRabbit follow-ups - config: --config now signals daemon intent so bare nemo-flow --config X routes to the gateway instead of dropping into the wizard. - launcher: easy-path setup trigger honors explicit --config and forwards the path to the synthetic RunCommand so run() reads the same file. - gateway: treat whitespace-only OPENAI_API_KEY as missing; trim injected key value before building the Bearer header. - setup: merge_agents_entry tolerates non-table 'agents' by replacing with a fresh table (no more panic on malformed user files). - setup: 'config --reset <agent>' against a doc with no [agents] table now reports 'nothing to reset' instead of falsely reporting success. - doctor: add warn-only footer test to lock in the pass-with-warnings branch landed earlier. Signed-off-by: Ajay Thorve --- crates/cli/src/config.rs | 10 +++-- crates/cli/src/gateway.rs | 11 +++++- crates/cli/src/launcher.rs | 15 +++++++- crates/cli/src/setup.rs | 45 +++++++++++++++-- crates/cli/tests/coverage/doctor_tests.rs | 18 +++++++++ 5 files changed, 76 insertions(+), 23 deletions(-) diff --git a/crates/cli/src/config.rs b/crates/cli/src/config.rs index 6d105586..9acef9ec 100644 --- a/crates/cli/src/config.rs +++ b/crates/cli/src/config.rs @@ -164,16 +164,18 @@ pub(crate) struct ServerArgs { } impl ServerArgs { - /// True when the user passed any daemon-specific server flag on the CLI.
Used by the bare - /// `nemo-flow` dispatch to choose between "user wants the gateway daemon" (any daemon flag - /// present) and "user just typed the bare command" (start the setup wizard). `--config` is - /// excluded — it's relevant to every subcommand, not a daemon-mode signal. + /// True when the user passed any flag that signals "I want the gateway, not the wizard." Used + /// by the bare `nemo-flow` dispatch to choose between launching the long-running daemon and + /// dropping into setup. `--config` is included: someone running `nemo-flow --config <path>` + /// with no subcommand has explicitly pointed at a config file, which is only meaningful for + /// daemon startup — the wizard creates configs, it doesn't consume them. pub(crate) fn requested_daemon_mode(&self) -> bool { self.bind.is_some() || self.openai_base_url.is_some() || self.anthropic_base_url.is_some() || self.atif_dir.is_some() || self.openinference_endpoint.is_some() + || self.config.is_some() } } diff --git a/crates/cli/src/gateway.rs b/crates/cli/src/gateway.rs index 88444878..fcd4f538 100644 --- a/crates/cli/src/gateway.rs +++ b/crates/cli/src/gateway.rs @@ -553,9 +553,11 @@ async fn forward_upstream_request( // the upstream just receives no auth and 401s, which is no better than letting it reject the // JWT itself — and stripping silently can break setups that point the gateway at an upstream // that happens to accept the ChatGPT-Plus token. + // Whitespace-only keys are effectively missing: stripping the inbound JWT and injecting an + // empty/whitespace bearer just trades one 401 for another while losing observability.
let has_openai_env = std::env::var("OPENAI_API_KEY") .ok() - .filter(|v| !v.is_empty()) + .filter(|v| !v.trim().is_empty()) .is_some(); let sanitized = strip_chatgpt_oauth_for_openai_route(headers, route, has_openai_env); let mut upstream = http.request(method.clone(), url).body(body_bytes.clone()); @@ -645,6 +647,9 @@ where let Some(value) = env_lookup(env_var) else { return builder; }; + // Trim before testing emptiness — a value of " " is no more useful than "" and sending + // `Bearer ` with leading whitespace can confuse upstream auth parsers further down. + let value = value.trim().to_string(); if value.is_empty() { return builder; } @@ -727,9 +732,11 @@ pub(crate) async fn models( .map(|p| p.as_str()) .unwrap_or(parts.uri.path()), ); + // Whitespace-only keys are effectively missing: stripping the inbound JWT and injecting an + // empty/whitespace bearer just trades one 401 for another while losing observability. let has_openai_env = std::env::var("OPENAI_API_KEY") .ok() - .filter(|v| !v.is_empty()) + .filter(|v| !v.trim().is_empty()) .is_some(); let sanitized = strip_chatgpt_oauth_for_openai_route(&parts.headers, provider, has_openai_env); let mut upstream = state.http.get(upstream_url); diff --git a/crates/cli/src/launcher.rs b/crates/cli/src/launcher.rs index cb88ee99..8d7ab260 100644 --- a/crates/cli/src/launcher.rs +++ b/crates/cli/src/launcher.rs @@ -47,7 +47,16 @@ pub(crate) async fn easy_path( command: EasyPathCommand, inherited: Option<&ServerArgs>, ) -> Result { - if !any_config_file_exists() { + // Explicit `--config <path>` short-circuits the discovery-based setup trigger: when the + // user has pointed at a specific file, that file is the contract — fire setup only if it + // doesn't exist yet, and never run setup just because no config lives at any default + // discovery location.
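The whitespace-only-key rule above reduces to one normalization step before the Bearer header is built. A minimal sketch of that rule; `effective_bearer` is an illustrative helper, not the gateway's actual function:

```rust
// Sketch of the "whitespace-only key is no key" rule: normalize an env-var value
// before deciding whether a Bearer header can be injected at all. Illustrative only.
fn effective_bearer(raw: Option<&str>) -> Option<String> {
    raw.map(str::trim) // "  sk-abc " becomes "sk-abc"
        .filter(|v| !v.is_empty()) // "" and "   " both count as missing
        .map(|v| format!("Bearer {v}"))
}

fn main() {
    assert_eq!(
        effective_bearer(Some("  sk-abc ")),
        Some("Bearer sk-abc".to_string())
    );
    // Whitespace-only: treated as unset, so the inbound JWT is left alone.
    assert_eq!(effective_bearer(Some("   ")), None);
    assert_eq!(effective_bearer(None), None);
    println!("ok");
}
```

Applying trim in both places (the `has_openai_env` presence check and the header-injection path) keeps the two decisions consistent: the gateway never strips the inbound ChatGPT JWT unless it actually has a usable replacement key.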
+ let explicit_config = inherited.and_then(|args| args.config.as_deref()); + let needs_setup = match explicit_config { + Some(path) => !path.exists(), + None => !any_config_file_exists(), + }; + if needs_setup { // No config anywhere — fire setup inline, scoped to the agent the user typed. After // it returns, config discovery will pick up the freshly-written `config.toml` and // `run()` below will see a populated environment. If setup errors (non-TTY, user @@ -56,7 +65,9 @@ pub(crate) async fn easy_path( } let synthetic = RunCommand { agent: Some(agent), - config: None, + // Forward the explicit config path so `run` parses the same file the user asked for, + // rather than re-discovering from defaults. + config: explicit_config.map(std::path::Path::to_path_buf), openai_base_url: None, anthropic_base_url: None, atif_dir: None, diff --git a/crates/cli/src/setup.rs b/crates/cli/src/setup.rs index 2e2c02bd..ef1dfa43 100644 --- a/crates/cli/src/setup.rs +++ b/crates/cli/src/setup.rs @@ -276,12 +276,18 @@ fn merge_agents_entry(dst: &mut DocumentMut, src: &DocumentMut, agent_key: &str) else { return; }; - if !dst.contains_key("agents") { + // Defensive: if the existing config has `agents = "literal"` or `agents = [...]` (anything + // not a table) the original `.as_table_mut().unwrap()` panicked. Replace any non-table + // value with a fresh table so a malformed user file degrades to an overwrite, not a crash. 
+ let needs_init = dst + .get("agents") + .is_none_or(|item| item.as_table().is_none()); + if needs_init { dst["agents"] = Item::Table(Table::new()); } let agents_table = dst["agents"] .as_table_mut() - .expect("agents key was just inserted as a table"); + .expect("agents key is a table after the init guard above"); agents_table.insert(agent_key, src_agent.clone()); } @@ -310,19 +316,28 @@ pub(crate) fn reset(agent_hint: Option) -> Result<(), CliError> { let mut doc: DocumentMut = raw.parse().map_err(|err| { CliError::Config(format!("could not parse existing config: {err}")) })?; - if let Some(agents) = doc.get_mut("agents").and_then(Item::as_table_mut) { - if agents.remove(agent_key).is_none() { - println!( - " No `[agents.{agent_key}]` block to reset in {}", - path.display() - ); - return Ok(()); - } - // Remove the empty `[agents]` table itself so the file stays tidy when no agent - // entries remain. - if agents.is_empty() { - doc.remove("agents"); - } + // Three reasons we have nothing to remove: no `[agents]` table at all, the `agents` + // key holds a non-table value, or the table is missing this specific agent's block. + // In every case we must report "nothing to reset" and skip the write — silently + // printing "✓ Removed" when nothing changed misleads the user about file state. + let Some(agents) = doc.get_mut("agents").and_then(Item::as_table_mut) else { + println!( + " No `[agents.{agent_key}]` block to reset in {}", + path.display() + ); + return Ok(()); + }; + if agents.remove(agent_key).is_none() { + println!( + " No `[agents.{agent_key}]` block to reset in {}", + path.display() + ); + return Ok(()); + } + // Remove the empty `[agents]` table itself so the file stays tidy when no agent + // entries remain. 
+ if agents.is_empty() { + doc.remove("agents"); } std::fs::write(&path, doc.to_string())?; println!(" ✓ Removed `[agents.{agent_key}]` from {}", path.display()); diff --git a/crates/cli/tests/coverage/doctor_tests.rs b/crates/cli/tests/coverage/doctor_tests.rs index 2f004efc..d0effda8 100644 --- a/crates/cli/tests/coverage/doctor_tests.rs +++ b/crates/cli/tests/coverage/doctor_tests.rs @@ -115,6 +115,24 @@ fn format_human_reports_failure_summary_when_anything_failed() { assert!(rendered.contains("Some checks FAILED")); } +#[test] +fn format_human_distinguishes_pass_with_warnings_from_clean_pass() { + let mut report = empty_report(); + report.observability.push(Check { + name: "ATIF dir", + status: Status::Warn, + details: "directory missing — will be created on first write".into(), + }); + let rendered = format_human(&report); + // Exit code stays 0 (warns don't fail), but the footer must call out that warnings exist + // so users aren't lulled by an "All checks passed." string. + assert!(rendered.contains("All checks passed")); + assert!( + rendered.contains("warnings"), + "warn-only report should surface the word `warnings` in the footer, got:\n{rendered}" + ); +} + #[test] fn format_json_is_stable_and_versioned() { let report = empty_report(); From 957823ff8ca63683b7e83277731239c88f9ee7c2 Mon Sep 17 00:00:00 2001 From: Ajay Thorve Date: Tue, 12 May 2026 12:27:23 -0700 Subject: [PATCH 09/15] Improve CLI doctor and exporter config Signed-off-by: Ajay Thorve --- crates/cli/src/banner.rs | 168 ++-------- crates/cli/src/config.rs | 131 ++++++-- crates/cli/src/doctor.rs | 350 +++++++++++++++++--- crates/cli/src/launcher.rs | 16 +- crates/cli/src/main.rs | 11 +- crates/cli/src/session.rs | 56 +++- crates/cli/src/setup.rs | 89 +++-- crates/cli/tests/cli_tests.rs | 53 +++ crates/cli/tests/coverage/banner_tests.rs | 58 +--- crates/cli/tests/coverage/config_tests.rs | 69 +++- crates/cli/tests/coverage/doctor_tests.rs | 102 +++++- crates/cli/tests/coverage/gateway_tests.rs 
| 11 +- crates/cli/tests/coverage/launcher_tests.rs | 27 ++ crates/cli/tests/coverage/server_tests.rs | 4 +- crates/cli/tests/coverage/session_tests.rs | 82 ++--- crates/cli/tests/coverage/setup_tests.rs | 20 +- integrations/coding-agents/README.md | 21 +- 17 files changed, 872 insertions(+), 396 deletions(-) diff --git a/crates/cli/src/banner.rs b/crates/cli/src/banner.rs index e59af4ff..180159cd 100644 --- a/crates/cli/src/banner.rs +++ b/crates/cli/src/banner.rs @@ -1,25 +1,22 @@ // SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved. // SPDX-License-Identifier: Apache-2.0 -//! Slanted ANSI-Shadow "NeMo Flow" banner with a tracer dot that curves over the brand. +//! Slanted ANSI-Shadow "NeMo Flow" banner. //! //! Static art: filled block letters in NVIDIA green, each row shifted one column right of the -//! row above for an italic lean. Animation: a single bright dot enters from the top-left, -//! glides smoothly horizontally above "NeMo", dips through the gap between "NeMo" and "Flow", -//! glides horizontally below "Flow", and the banner then settles with a small "vX.Y.Z" tag in -//! green at the bottom-right. +//! row above for an italic lean. The settled frame includes a small "vX.Y.Z" tag in green at +//! the bottom-right. //! //! Three entry points: -//! - [`print_intro`] — wizard intro / bare `nemo-flow` (animated) -//! - [`print_doctor_header`] — settled static frame for `doctor` (no animation) +//! - [`print_intro`] — wizard intro / bare `nemo-flow` +//! - [`print_doctor_header`] — settled static frame for `doctor` //! - [`render_frame`] — pure helper for tests -use std::io::{IsTerminal, Write}; -use std::time::Duration; +use std::io::IsTerminal; /// Filled-block NeMo Flow figlet with a per-row right shift so the letters lean italic. Six -/// content rows; the renderer prepends one blank row above and appends one below to host the -/// tracer dot's path. 
+/// content rows; the renderer prepends one blank row above and appends one below for spacing +/// and the docked version tag. const BANNER_LINES: &[&str] = &[ " ███╗ ██╗███████╗███╗ ███╗ ██████╗ ███████╗██╗ ██████╗ ██╗ ██╗", " ████╗ ██║██╔════╝████╗ ████║██╔═══██╗ ██╔════╝██║ ██╔═══██╗██║ ██║", @@ -29,27 +26,20 @@ const BANNER_LINES: &[&str] = &[ " ╚═╝ ╚═══╝╚══════╝╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚══════╝ ╚═════╝ ╚══╝╚══╝", ]; -/// Banner geometry (visual rows including the dot's top and bottom rails). +/// Banner geometry (visual rows including the top and bottom spacing rails). const FIGLET_ROWS: usize = 6; -const TOP_RAIL: usize = 0; const BOTTOM_RAIL: usize = FIGLET_ROWS + 1; // row index of the row below the figlet const TOTAL_ROWS: usize = FIGLET_ROWS + 2; // top rail + 6 figlet rows + bottom rail -/// Tracer-dot path waypoints — measured in columns. The dot moves linearly in col across -/// frames; its row follows an S-shape (top rail → smooth descent → bottom rail) based on -/// which segment the column falls into. -const COL_START: usize = 13; // above the "N" of NeMo +/// Version tag position, measured in columns. const COL_END: usize = 92; // right edge below "Flow" -const COL_DIP_START: usize = 44; // start descending after we clear "NeMo" -const COL_DIP_END: usize = 56; // finish descending before we hit "Flow" const MIN_WIDTH: usize = 105; -// NVIDIA green on the figlet text and the surrounding border. The tracer head is a bright -// mint-green dot. The settled docked tag at bottom-right is dim green to read as a quiet -// version label without competing with the brand mark. +// NVIDIA green on the figlet text and the surrounding border. The settled docked tag at +// bottom-right is dim green to read as a quiet version label without competing with the brand +// mark. 
const NVIDIA_GREEN: &str = "\x1b[38;5;112m"; -const DOT_HEAD: &str = "\x1b[1;38;5;121m"; const DOCK_TAG: &str = "\x1b[2;38;5;112m"; const RESET: &str = "\x1b[0m"; @@ -87,47 +77,19 @@ fn terminal_width() -> Option { .or(Some(120)) } -/// Total animation frames for the tracer dot's traversal. Drives both the timing in -/// `animate_reveal` and the path-step helper used by tests. Higher count = smoother glide. -pub(crate) const TRACER_FRAMES: usize = 160; - -/// Returns the tracer dot's `(row, col)` at the given frame. The dot moves linearly in `col` -/// from `COL_START` to `COL_END` and follows an S-shape in `row`: stays on the top rail until -/// it has cleared "NeMo", smoothly descends through the gap, then stays on the bottom rail -/// until it exits below "Flow". `None` when the animation has finished. -pub(crate) fn tracer_position(frame: usize) -> Option<(usize, usize)> { - if frame >= TRACER_FRAMES { - return None; - } - let t = frame as f32 / (TRACER_FRAMES - 1).max(1) as f32; - let col = COL_START as f32 + (COL_END - COL_START) as f32 * t; - let col_usize = col as usize; - let row = if col_usize <= COL_DIP_START { - TOP_RAIL as f32 - } else if col_usize >= COL_DIP_END { - BOTTOM_RAIL as f32 - } else { - // Smooth ease (smoothstep) between top rail and bottom rail across the dip range. - let local = (col_usize - COL_DIP_START) as f32 / (COL_DIP_END - COL_DIP_START) as f32; - let eased = local * local * (3.0 - 2.0 * local); - TOP_RAIL as f32 + (BOTTOM_RAIL - TOP_RAIL) as f32 * eased - }; - Some((row.round() as usize, col_usize)) -} - -/// Pure renderer. `tracer` carries the dot's (row, col) for this frame, or `None` to render -/// the settled static banner. `color=false` strips all ANSI escapes. -pub(crate) fn render_frame(tracer: Option<(usize, usize)>, color: bool) -> String { - render_frame_inner(tracer, color, false) +/// Pure renderer for the static banner. `color=false` strips all ANSI escapes. 
+#[cfg(test)] +pub(crate) fn render_frame(color: bool) -> String { + render_frame_inner(color, false) } -/// Settled frame with a glowing "● vX.Y.Z" tag docked at the bottom-right under "Flow". Used -/// after the animation finishes and as the static frame for the doctor header. +/// Settled frame with a quiet "vX.Y.Z" tag docked at the bottom-right under "Flow". Used by +/// the intro and doctor header. pub(crate) fn render_docked_frame(color: bool) -> String { - render_frame_inner(None, color, true) + render_frame_inner(color, true) } -fn render_frame_inner(tracer: Option<(usize, usize)>, color: bool, docked: bool) -> String { +fn render_frame_inner(color: bool, docked: bool) -> String { let mut out = String::with_capacity(BANNER_LINES.iter().map(|l| l.len() + 64).sum()); out.push('\n'); @@ -171,14 +133,6 @@ fn render_frame_inner(tracer: Option<(usize, usize)>, color: bool, docked: bool) } } - // Overlay the tracer head only — no trail. Smooth motion comes from the higher frame count. - if let Some((row, col)) = tracer - && row < grid.len() - && col < grid[row].len() - { - grid[row][col] = '●'; - } - // Top border row. push_border_line(&mut out, BORDER_TL, BORDER_TR, max_width, color); @@ -205,14 +159,6 @@ fn render_frame_inner(tracer: Option<(usize, usize)>, color: bool, docked: bool) } else { out.push(*ch); } - } else if Some((row_idx, col_idx)) == tracer && *ch == '●' { - if color { - out.push_str(DOT_HEAD); - out.push(*ch); - out.push_str(RESET); - } else { - out.push('*'); - } } else if is_figlet_glyph(*ch) { if color { out.push_str(NVIDIA_GREEN); @@ -269,7 +215,7 @@ pub(crate) fn print_intro() { print_plain_header(); return; } - animate_reveal(); + print!("{}", render_docked_frame(true)); } pub(crate) fn print_doctor_header() { @@ -280,76 +226,6 @@ pub(crate) fn print_doctor_header() { print!("{}", render_docked_frame(true)); } -fn animate_reveal() { - // Smoothness strategy: - // 1. Print the static banner ONCE so the figlet never flickers. - // 2. 
Save cursor (DEC ESC 7), then per-frame restore + move-up + move-to-col to repaint - // just the dot cell. Erasing + repainting one cell is far cheaper than redrawing the - // full banner each frame and reads as continuous motion. - // 3. Skip frames where the integer column hasn't advanced — we'd just sleep and redraw - // the same cell, wasting time and breaking the perceived pace. - let frame_ms = 8u64; - let mut stdout = std::io::stdout(); - let _ = write!(stdout, "\x1b[?25l"); - // Paint the static banner. Cursor lands on the line just below the bottom rail. - let _ = write!(stdout, "{}", render_frame(None, true)); - // Save cursor position so each frame can restore back to this anchor before navigating. - let _ = write!(stdout, "\x1b7"); - let _ = stdout.flush(); - - let mut last_pos: Option<(usize, usize)> = None; - for f in 0..TRACER_FRAMES { - let Some((row, col)) = tracer_position(f) else { - break; - }; - // Skip duplicate-column frames — keeps motion paced even though we still sleep. - if last_pos == Some((row, col)) { - std::thread::sleep(Duration::from_millis(frame_ms)); - continue; - } - // Erase the previous dot (write a space at the old position). - if let Some((pr, pc)) = last_pos { - paint_cell(&mut stdout, pr, pc, ' ', None); - } - // Draw the current dot. - paint_cell(&mut stdout, row, col, '●', Some(DOT_HEAD)); - let _ = stdout.flush(); - last_pos = Some((row, col)); - std::thread::sleep(Duration::from_millis(frame_ms)); - } - - // Settle: erase the last dot and stamp the version tag at the dock spot. - if let Some((pr, pc)) = last_pos { - paint_cell(&mut stdout, pr, pc, ' ', None); - } - let dock_tag = format!(" v{}", env!("CARGO_PKG_VERSION")); - // Move to (BOTTOM_RAIL, COL_END) inside the border and write the dim-green tag. Anchor sits - // below the bottom border line; +1 vertical for the border, +1 horizontal for the left - // border. 
- let _ = write!(stdout, "\x1b8"); // restore to anchor below banner - let _ = write!(stdout, "\x1b[{}A", TOTAL_ROWS - BOTTOM_RAIL + 1); - let _ = write!(stdout, "\x1b[{}G", COL_END + 2); - let _ = write!(stdout, "{DOCK_TAG}{dock_tag}{RESET}"); - let _ = write!(stdout, "\x1b8"); - let _ = write!(stdout, "\x1b[?25h"); - let _ = stdout.flush(); -} - -/// Paint a single character at grid (row, col) relative to the anchor saved by `\x1b7` after -/// the static banner was printed. Accounts for the surrounding border: +1 row offset for the -/// bottom border line and +1 column for the left border. `color` is an optional SGR prefix -/// (RESET is always emitted after the char). Cursor is left at the anchor. -fn paint_cell(out: &mut std::io::Stdout, row: usize, col: usize, ch: char, color: Option<&str>) { - let _ = write!(out, "\x1b8"); - let _ = write!(out, "\x1b[{}A", TOTAL_ROWS - row + 1); - let _ = write!(out, "\x1b[{}G", col + 2); - if let Some(c) = color { - let _ = write!(out, "{c}{ch}{RESET}"); - } else { - let _ = write!(out, "{ch}"); - } -} - fn print_plain_header() { let version = env!("CARGO_PKG_VERSION"); println!(); diff --git a/crates/cli/src/config.rs b/crates/cli/src/config.rs index 9acef9ec..76393920 100644 --- a/crates/cli/src/config.rs +++ b/crates/cli/src/config.rs @@ -75,7 +75,7 @@ pub(crate) enum Command { Hermes(EasyPathCommand), /// Run the interactive setup (writes `.nemo-flow/config.toml`) Config(ConfigCommand), - /// Diagnose env, agents, config, observability (use `--json` for machine output) + /// Diagnose env, agents, config, observability (optionally scoped to one agent) Doctor(DoctorCommand), /// List supported and locally-detected agents (use `--json` for machine output) Agents(AgentsCommand), @@ -92,6 +92,9 @@ pub(crate) enum Command { /// so it doesn't pollute the help output of subcommands where it has no meaning. #[derive(Debug, Clone, Args)] pub(crate) struct DoctorCommand { + /// Limit readiness checks to one supported agent. 
+ #[arg(value_enum)] + pub(crate) agent: Option, /// Emit machine-readable JSON instead of the formatted human report. Versioned via /// `schema_version`; stable shape for CI / evaluation harness consumption. #[arg(long)] @@ -158,6 +161,9 @@ pub(crate) struct ServerArgs { /// Directory to write ATIF trajectory JSON files into per session #[arg(long, env = "NEMO_FLOW_ATIF_DIR")] pub(crate) atif_dir: Option, + /// Directory to write per-event ATOF JSONL files into (one event per line, raw ATOF shape) + #[arg(long, env = "NEMO_FLOW_ATOF_DIR")] + pub(crate) atof_dir: Option, /// OpenInference-compatible OTLP HTTP endpoint for streaming spans (Phoenix, Arize, etc.) #[arg(long, env = "NEMO_FLOW_OPENINFERENCE_ENDPOINT")] pub(crate) openinference_endpoint: Option, @@ -174,6 +180,7 @@ impl ServerArgs { || self.openai_base_url.is_some() || self.anthropic_base_url.is_some() || self.atif_dir.is_some() + || self.atof_dir.is_some() || self.openinference_endpoint.is_some() || self.config.is_some() } @@ -184,12 +191,28 @@ pub(crate) struct GatewayConfig { pub(crate) bind: SocketAddr, pub(crate) openai_base_url: String, pub(crate) anthropic_base_url: String, - pub(crate) atif_dir: Option, - pub(crate) openinference_endpoint: Option, + pub(crate) exporters: ExportersConfig, pub(crate) metadata: Option, pub(crate) plugin_config: Option, } +/// Sinks the gateway writes observability data to. Grouped because every layer (CLI flags, env, +/// TOML, headers, session resolution) historically duplicated `atif_dir` / `openinference_endpoint` +/// side-by-side; adding `atof_dir` doubled that plumbing. This struct is the single seat where +/// exporter knobs live on the runtime model — flat CLI flags and TOML keys still exist for +/// ergonomics, but they all funnel into here. +/// +/// `atif_dir` — directory for per-session ATIF trajectory JSON files (one file per session). +/// `atof_dir` — directory for per-event ATOF JSONL streams (one event per line, raw event shape). 
+/// `openinference_endpoint` — OTLP HTTP endpoint for streaming OpenInference spans +/// (Phoenix / Arize / OTLP-compatible). +#[derive(Debug, Clone, Default)] +pub(crate) struct ExportersConfig { + pub(crate) atif_dir: Option<PathBuf>, + pub(crate) atof_dir: Option<PathBuf>, + pub(crate) openinference_endpoint: Option<String>, +} + #[derive(Debug, Clone, Args)] pub(crate) struct HookForwardCommand { #[arg(value_enum)] @@ -199,6 +222,8 @@ pub(crate) struct HookForwardCommand { #[arg(long)] pub(crate) atif_dir: Option<PathBuf>, #[arg(long)] + pub(crate) atof_dir: Option<PathBuf>, + #[arg(long)] pub(crate) openinference_endpoint: Option<String>, #[arg(long)] pub(crate) profile: Option<String>, @@ -238,6 +263,8 @@ pub(crate) struct RunCommand { #[arg(long)] pub(crate) atif_dir: Option<PathBuf>, #[arg(long)] + pub(crate) atof_dir: Option<PathBuf>, + #[arg(long)] pub(crate) openinference_endpoint: Option<String>, #[arg(long)] pub(crate) session_metadata: Option<String>, @@ -274,8 +301,7 @@ pub(crate) enum GatewayMode { #[derive(Debug, Clone, Default)] pub(crate) struct SessionConfig { - pub(crate) atif_dir: Option<PathBuf>, - pub(crate) openinference_endpoint: Option<String>, + pub(crate) exporters: ExportersConfig, pub(crate) metadata: Option<Value>, pub(crate) plugin_config: Option<Value>, pub(crate) profile: Option<String>, @@ -287,11 +313,16 @@ impl GatewayConfig { // Header JSON fields are parsed opportunistically; invalid JSON is treated as absent here // because install and hook-forward validate generated header values before sending them.
pub(crate) fn session_config_from_headers(&self, headers: &HeaderMap) -> SessionConfig { - let atif_dir = header_string(headers, "x-nemo-flow-atif-dir") - .map(PathBuf::from) - .or_else(|| self.atif_dir.clone()); - let openinference_endpoint = header_string(headers, "x-nemo-flow-openinference-endpoint") - .or_else(|| self.openinference_endpoint.clone()); + let exporters = ExportersConfig { + atif_dir: header_string(headers, "x-nemo-flow-atif-dir") + .map(PathBuf::from) + .or_else(|| self.exporters.atif_dir.clone()), + atof_dir: header_string(headers, "x-nemo-flow-atof-dir") + .map(PathBuf::from) + .or_else(|| self.exporters.atof_dir.clone()), + openinference_endpoint: header_string(headers, "x-nemo-flow-openinference-endpoint") + .or_else(|| self.exporters.openinference_endpoint.clone()), + }; let metadata = header_json(headers, "x-nemo-flow-session-metadata").or_else(|| self.metadata.clone()); let plugin_config = header_json(headers, "x-nemo-flow-plugin-config") @@ -299,8 +330,7 @@ impl GatewayConfig { let profile = header_string(headers, "x-nemo-flow-config-profile"); let gateway_mode = header_string(headers, "x-nemo-flow-gateway-mode"); SessionConfig { - atif_dir, - openinference_endpoint, + exporters, metadata, plugin_config, profile, @@ -356,6 +386,7 @@ impl Default for CursorAgentConfig { #[derive(Debug, Clone, Default, Deserialize)] struct FileConfig { upstream: Option, + exporters: Option, observability: Option, export: Option, plugins: Option, @@ -371,12 +402,19 @@ struct FileUpstreamConfig { #[derive(Debug, Clone, Default, Deserialize)] struct FileObservabilityConfig { atif_dir: Option, + atof_dir: Option, metadata: Option, } -// `[export.]` stays nested so future per-backend config (headers, timeout, protocol) -// can live alongside `endpoint` without flattening into a wall of `_*` keys at the -// observability layer. 
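The per-field fallback in `session_config_from_headers` (a session-scoped header wins, the gateway-level value is the default) is just `Option::or` applied field by field. A minimal standalone sketch, where `resolve_field` is a hypothetical helper standing in for the per-field plumbing:

```rust
// Hedged sketch of the header-over-gateway-default precedence used for each exporter
// field. `resolve_field` is an illustrative name, not part of the crate.
fn resolve_field(header: Option<String>, gateway_default: Option<String>) -> Option<String> {
    header.or(gateway_default)
}

fn main() {
    // Header present: the session-scoped value wins.
    assert_eq!(
        resolve_field(Some("./atof-session".into()), Some("./atof".into())).as_deref(),
        Some("./atof-session")
    );
    // Header absent: fall back to the gateway config.
    assert_eq!(resolve_field(None, Some("./atof".into())).as_deref(), Some("./atof"));
    // Neither set: the sink stays disabled.
    assert_eq!(resolve_field(None, None), None);
}
```

The same shape repeats for `atif_dir`, `atof_dir`, and `openinference_endpoint`, which is what motivates collapsing them into one `ExportersConfig` value.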
+#[derive(Debug, Clone, Default, Deserialize)] +struct FileExportersConfig { + atif_dir: Option, + atof_dir: Option, + openinference_endpoint: Option, +} + +// Legacy `[export.]` shape. New configs use `[exporters]`; this stays readable so +// existing user files do not break. #[derive(Debug, Clone, Default, Deserialize)] struct FileExportConfig { openinference: Option, @@ -428,8 +466,7 @@ impl Default for GatewayConfig { .expect("valid default bind address"), openai_base_url: "https://api.openai.com".into(), anthropic_base_url: "https://api.anthropic.com".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, } @@ -488,10 +525,13 @@ fn apply_run_url_overrides(config: &mut GatewayConfig, command: &RunCommand) { config.anthropic_base_url = value.clone(); } if let Some(value) = &command.atif_dir { - config.atif_dir = Some(value.clone()); + config.exporters.atif_dir = Some(value.clone()); + } + if let Some(value) = &command.atof_dir { + config.exporters.atof_dir = Some(value.clone()); } if let Some(value) = &command.openinference_endpoint { - config.openinference_endpoint = Some(value.clone()); + config.exporters.openinference_endpoint = Some(value.clone()); } } @@ -523,10 +563,13 @@ fn apply_server_overrides(config: &mut GatewayConfig, args: &ServerArgs) { config.anthropic_base_url = value.clone(); } if let Some(value) = &args.atif_dir { - config.atif_dir = Some(value.clone()); + config.exporters.atif_dir = Some(value.clone()); + } + if let Some(value) = &args.atof_dir { + config.exporters.atof_dir = Some(value.clone()); } if let Some(value) = &args.openinference_endpoint { - config.openinference_endpoint = Some(value.clone()); + config.exporters.openinference_endpoint = Some(value.clone()); } } @@ -620,6 +663,7 @@ fn apply_file_config(resolved: &mut ResolvedConfig, value: toml::Value) -> Resul apply_file_upstream_config(&mut resolved.gateway, config.upstream); 
apply_file_observability_config(&mut resolved.gateway, config.observability); apply_file_export_config(&mut resolved.gateway, config.export); + apply_file_exporters_config(&mut resolved.gateway, config.exporters); apply_file_plugins_config(&mut resolved.gateway, config.plugins); apply_file_agents_config(&mut resolved.agents, config.agents); Ok(()) @@ -639,10 +683,9 @@ fn apply_file_upstream_config(gateway: &mut GatewayConfig, upstream: Option, @@ -651,16 +694,17 @@ fn apply_file_observability_config( return; }; if let Some(value) = observability.atif_dir { - gateway.atif_dir = Some(value); + gateway.exporters.atif_dir = Some(value); + } + if let Some(value) = observability.atof_dir { + gateway.exporters.atof_dir = Some(value); } if let Some(value) = observability.metadata { gateway.metadata = Some(value); } } -// Applies optional OpenInference export config. The nested shape leaves room for future -// exporter-specific fields (e.g., `headers`, `timeout`, `protocol`) without flattening into -// a wall of `openinference_*` keys at the observability layer. +// Applies legacy optional OpenInference export config. New configs use `[exporters]`. 
fn apply_file_export_config(gateway: &mut GatewayConfig, export: Option) { let Some(export) = export else { return; @@ -668,7 +712,27 @@ fn apply_file_export_config(gateway: &mut GatewayConfig, export: Option, +) { + let Some(exporters) = exporters else { + return; + }; + if let Some(value) = exporters.atif_dir { + gateway.exporters.atif_dir = Some(value); + } + if let Some(value) = exporters.atof_dir { + gateway.exporters.atof_dir = Some(value); + } + if let Some(value) = exporters.openinference_endpoint { + gateway.exporters.openinference_endpoint = Some(value); } } @@ -723,10 +787,13 @@ fn apply_env_config(config: &mut GatewayConfig) { config.anthropic_base_url = value; } if let Some(value) = std::env::var_os("NEMO_FLOW_ATIF_DIR") { - config.atif_dir = Some(PathBuf::from(value)); + config.exporters.atif_dir = Some(PathBuf::from(value)); + } + if let Some(value) = std::env::var_os("NEMO_FLOW_ATOF_DIR") { + config.exporters.atof_dir = Some(PathBuf::from(value)); } if let Ok(value) = std::env::var("NEMO_FLOW_OPENINFERENCE_ENDPOINT") { - config.openinference_endpoint = Some(value); + config.exporters.openinference_endpoint = Some(value); } } diff --git a/crates/cli/src/doctor.rs b/crates/cli/src/doctor.rs index b290d237..429c9da1 100644 --- a/crates/cli/src/doctor.rs +++ b/crates/cli/src/doctor.rs @@ -9,7 +9,7 @@ //! - `DoctorReport` is the resulting pure data shape. //! - `format_human(&report)` / `format_json(&report)` render the report. 
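The `apply_env_config` hunk above layers `NEMO_FLOW_*` environment variables over the resolved exporter sinks: a variable that is set overrides the corresponding field, unset variables leave the file/CLI values alone. A self-contained sketch of that layering, under the assumption that the struct and function names here are illustrative rather than the crate's own:

```rust
use std::env;
use std::path::PathBuf;

// Illustrative stand-in for the exporter sinks; the real crate groups these in
// ExportersConfig on GatewayConfig.
#[derive(Debug, Default, PartialEq)]
struct Exporters {
    atif_dir: Option<PathBuf>,
    atof_dir: Option<PathBuf>,
    openinference_endpoint: Option<String>,
}

// Each NEMO_FLOW_* variable, when present, wins over whatever an earlier layer set.
fn apply_env(exporters: &mut Exporters) {
    if let Some(v) = env::var_os("NEMO_FLOW_ATIF_DIR") {
        exporters.atif_dir = Some(PathBuf::from(v));
    }
    if let Some(v) = env::var_os("NEMO_FLOW_ATOF_DIR") {
        exporters.atof_dir = Some(PathBuf::from(v));
    }
    if let Ok(v) = env::var("NEMO_FLOW_OPENINFERENCE_ENDPOINT") {
        exporters.openinference_endpoint = Some(v);
    }
}

fn main() {
    env::remove_var("NEMO_FLOW_OPENINFERENCE_ENDPOINT");
    env::set_var("NEMO_FLOW_ATOF_DIR", "./atof");
    let mut exporters = Exporters::default();
    apply_env(&mut exporters);
    assert_eq!(exporters.atof_dir, Some(PathBuf::from("./atof")));
    // An unset variable leaves its sink disabled.
    assert_eq!(exporters.openinference_endpoint, None);
}
```

Because every layer funnels into the same struct, adding a fourth sink later means touching this one function rather than re-plumbing flags, headers, and TOML separately.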
-use std::path::PathBuf; +use std::path::{Path, PathBuf}; use std::process::Stdio; use std::time::Duration; @@ -17,7 +17,7 @@ use serde::Serialize; use tokio::time::timeout; use crate::config::{ - CodingAgent, GatewayConfig, ResolvedConfig, ServerArgs, resolve_server_config, + AgentConfigs, CodingAgent, GatewayConfig, ResolvedConfig, ServerArgs, resolve_server_config, }; use crate::error::CliError; @@ -40,7 +40,7 @@ pub(crate) enum Status { Warn, Fail, /// The check ran but no relevant state was detected — purely informational (e.g. an agent - /// not on $PATH). Renders as a dim dot; not counted toward exit code. + /// not on $PATH). Renders as a dot; not counted toward exit code. Info, } @@ -50,6 +50,7 @@ pub(crate) struct DoctorReport { pub schema_version: u32, pub binary_version: &'static str, + pub target_agent: Option<String>, pub environment: EnvironmentInfo, pub configuration: ConfigurationInfo, pub agents: Vec<AgentInfo>, @@ -70,18 +71,23 @@ pub(crate) struct ConfigurationInfo { pub global: ConfigLayer, pub system: ConfigLayer, pub default_agent: Option<String>, + pub configured_agents: Vec<String>, } #[derive(Debug, Clone, Serialize)] pub(crate) struct ConfigLayer { pub path: PathBuf, pub status: Status, + pub active: bool, pub details: String, } #[derive(Debug, Clone, Serialize)] pub(crate) struct AgentInfo { pub name: &'static str, + pub status: Status, + pub configured: bool, + pub command: String, pub path: Option<PathBuf>, pub version: Option<String>, /// Free-form annotation, e.g. "hooks: installed" once we wire up hook detection. @@ -91,17 +97,21 @@ /// Drives all checks and produces a single `DoctorReport`. Network probes are bounded by a /// short timeout so the command always returns quickly. Filesystem checks short-circuit on /// the first missing directory.
-pub(crate) async fn collect_report() -> Result<DoctorReport, CliError> { +pub(crate) async fn collect_report( + target_agent: Option<CodingAgent>, +) -> Result<DoctorReport, CliError> { let resolved = resolve_server_config(&ServerArgs::default()).unwrap_or_default(); let cwd = std::env::current_dir().ok(); let home = home_dir(); + let configured_agents = configured_agent_names(&resolved.agents); Ok(DoctorReport { schema_version: 1, binary_version: env!("CARGO_PKG_VERSION"), + target_agent: target_agent.map(|agent| agent.as_arg().to_string()), environment: collect_environment(), - configuration: collect_configuration(cwd.as_deref(), home.as_deref()), - agents: collect_agents().await, + configuration: collect_configuration(cwd.as_deref(), home.as_deref(), configured_agents), + agents: collect_agents(target_agent, &resolved).await, observability: collect_observability(&resolved.gateway).await, completions: collect_completions(home.as_deref()), }) } @@ -131,8 +141,9 @@ fn os_version() -> String { } fn collect_configuration( - cwd: Option<&std::path::Path>, - home: Option<&std::path::Path>, + cwd: Option<&Path>, + home: Option<&Path>, + configured_agents: Vec<String>, ) -> ConfigurationInfo { let workspace_path = cwd .map(|p| p.join(".nemo-flow").join("config.toml")) @@ -152,14 +163,16 @@ // `default_agent` is reserved in the design for Phase 2 dispatch; not currently parsed // out of FileConfig. Doctor reports `None` until that lands.
default_agent: None, + configured_agents, } } -fn layer_status(path: &std::path::Path) -> ConfigLayer { +fn layer_status(path: &Path) -> ConfigLayer { if !path.exists() { return ConfigLayer { path: path.to_path_buf(), status: Status::Info, + active: false, details: "not present".into(), }; } @@ -171,23 +184,29 @@ fn layer_status(path: &std::path::Path) -> ConfigLayer { Ok(_) => ConfigLayer { path: path.to_path_buf(), status: Status::Pass, + active: true, details: "valid".into(), }, Err(err) => ConfigLayer { path: path.to_path_buf(), status: Status::Fail, + active: false, details: format!("invalid TOML: {err}"), }, }, Err(err) => ConfigLayer { path: path.to_path_buf(), status: Status::Fail, + active: false, details: format!("unreadable: {err}"), }, } } -async fn collect_agents() -> Vec { +async fn collect_agents( + target_agent: Option, + resolved: &ResolvedConfig, +) -> Vec { let supported = [ (CodingAgent::ClaudeCode, "claude", "claude"), (CodingAgent::Codex, "codex", "codex"), @@ -195,17 +214,45 @@ async fn collect_agents() -> Vec { (CodingAgent::Hermes, "hermes", "hermes"), ]; let mut out = Vec::with_capacity(supported.len()); - for (_, display_name, exec) in supported { - let path = which_on_path(exec); + for (agent, display_name, default_exec) in supported { + if target_agent.is_some_and(|target| target != agent) { + continue; + } + let configured = agent_configured(agent, &resolved.agents); + let target_requested = target_agent == Some(agent); + let command = agent_command(agent, &resolved.agents, default_exec); + let exec = command_executable(&command); + let path = which_command(exec); let version = match &path { Some(p) => probe_version(p).await, None => None, }; + let mut status = agent_command_status(path.as_deref(), configured, target_requested); + let (hook_status, hook_details) = + hook_status(agent, &resolved.agents, configured || target_requested); + status = combine_status(status, hook_status, configured || target_requested); + let mut details = 
Vec::new(); + details.push(if configured { + "configured".to_string() + } else if target_requested { + "not configured; first run will launch setup".to_string() + } else { + "not configured".to_string() + }); + if path.is_none() { + details.push(format!("command `{exec}` not found")); + } + if !hook_details.is_empty() { + details.push(hook_details); + } out.push(AgentInfo { name: display_name, + status, + configured, + command, path, version, - annotation: String::new(), + annotation: details.join("; "), }); } out @@ -218,7 +265,154 @@ fn which_on_path(exec: &str) -> Option { .find(|candidate| candidate.is_file()) } -async fn probe_version(binary: &std::path::Path) -> Option { +fn which_command(exec: &str) -> Option { + let candidate = Path::new(exec); + if candidate.components().count() > 1 || candidate.is_absolute() { + return candidate.is_file().then(|| candidate.to_path_buf()); + } + which_on_path(exec) +} + +fn command_executable(command: &str) -> &str { + command.split_whitespace().next().unwrap_or(command) +} + +fn agent_command(agent: CodingAgent, agents: &AgentConfigs, default_exec: &str) -> String { + configured_agent_command(agent, agents) + .cloned() + .unwrap_or_else(|| default_exec.to_string()) +} + +fn configured_agent_command(agent: CodingAgent, agents: &AgentConfigs) -> Option<&String> { + match agent { + CodingAgent::ClaudeCode => agents.claude.command.as_ref(), + CodingAgent::Codex => agents.codex.command.as_ref(), + CodingAgent::Cursor => agents.cursor.command.as_ref(), + CodingAgent::Hermes => agents.hermes.command.as_ref(), + } +} + +fn agent_configured(agent: CodingAgent, agents: &AgentConfigs) -> bool { + configured_agent_command(agent, agents).is_some() + || (matches!(agent, CodingAgent::Hermes) && agents.hermes.hooks_path.is_some()) +} + +fn configured_agent_names(agents: &AgentConfigs) -> Vec { + [ + (CodingAgent::ClaudeCode, "claude"), + (CodingAgent::Codex, "codex"), + (CodingAgent::Cursor, "cursor"), + (CodingAgent::Hermes, "hermes"), 
+ ] + .into_iter() + .filter_map(|(agent, name)| agent_configured(agent, agents).then_some(name.to_string())) + .collect() +} + +fn agent_command_status(path: Option<&Path>, configured: bool, target_requested: bool) -> Status { + match (path.is_some(), configured, target_requested) { + (true, false, true) => Status::Warn, + (true, _, _) => Status::Pass, + (false, true, _) | (false, _, true) => Status::Fail, + (false, false, false) => Status::Info, + } +} + +fn combine_status(base: Status, hook: Status, readiness_required: bool) -> Status { + if matches!(base, Status::Fail) || matches!(hook, Status::Fail) { + return Status::Fail; + } + if matches!(base, Status::Warn) || (readiness_required && matches!(hook, Status::Warn)) { + return Status::Warn; + } + base +} + +fn hook_status( + agent: CodingAgent, + agents: &AgentConfigs, + readiness_required: bool, +) -> (Status, String) { + match agent { + CodingAgent::ClaudeCode | CodingAgent::Codex => { + (Status::Pass, "hooks: injected during run".into()) + } + CodingAgent::Cursor if agents.cursor.patch_restore_hooks => { + (Status::Pass, "hooks: patched during run".into()) + } + CodingAgent::Cursor => hook_file_status( + cursor_hooks_path(), + CodingAgent::Cursor, + readiness_required, + "hooks: user-managed", + ), + CodingAgent::Hermes => match agents.hermes.hooks_path.as_deref() { + Some(path) => hook_file_status( + Ok(path.to_path_buf()), + CodingAgent::Hermes, + readiness_required, + "hooks", + ), + None if readiness_required => ( + Status::Fail, + "hooks: not installed; run `nemo-flow config hermes`".into(), + ), + None => (Status::Info, "hooks: not configured".into()), + }, + } +} + +fn hook_file_status( + path: Result, + agent: CodingAgent, + readiness_required: bool, + label: &str, +) -> (Status, String) { + let path = match path { + Ok(path) => path, + Err(err) => { + return ( + Status::Fail, + format!("{label}: could not resolve path: {err}"), + ); + } + }; + match std::fs::read_to_string(&path) { + Ok(raw) if 
raw.contains(&format!("hook-forward {}", agent.as_arg())) => ( + Status::Pass, + format!("{label}: installed at {}", path.display()), + ), + Ok(_) if readiness_required => ( + Status::Fail, + format!("{label}: missing NeMo Flow hook in {}", path.display()), + ), + Ok(_) => ( + Status::Info, + format!("{label}: no NeMo Flow hook in {}", path.display()), + ), + Err(error) if error.kind() == std::io::ErrorKind::NotFound && readiness_required => { + (Status::Fail, format!("{label}: missing {}", path.display())) + } + Err(error) if error.kind() == std::io::ErrorKind::NotFound => { + (Status::Info, format!("{label}: missing {}", path.display())) + } + Err(error) => ( + Status::Fail, + format!("{label}: could not read {}: {error}", path.display()), + ), + } +} + +fn cursor_hooks_path() -> Result { + let cwd = std::env::current_dir()?; + let project = cwd + .ancestors() + .find(|ancestor| ancestor.join(".cursor").is_dir()) + .unwrap_or(cwd.as_path()); + Ok(project.join(".cursor/hooks.json")) +} + +async fn probe_version(binary: &Path) -> Option { // Spawn ` --version` and read the first line of stdout. Bounded by the network // timeout (re-used as a generic short timeout) so a misbehaving binary doesn't hang doctor. 
let mut cmd = tokio::process::Command::new(binary); @@ -247,7 +441,7 @@ async fn probe_version(binary: &std::path::Path) -> Option { async fn collect_observability(gateway: &GatewayConfig) -> Vec { let mut checks = Vec::new(); - checks.push(match &gateway.atif_dir { + checks.push(match &gateway.exporters.atif_dir { None => Check { name: "ATIF dir", status: Status::Info, @@ -257,7 +451,15 @@ async fn collect_observability(gateway: &GatewayConfig) -> Vec { Ok(()) => Check { name: "ATIF dir", status: Status::Pass, - details: format!("{} (writable)", path.display()), + details: format!("{} (appears writable)", path.display()), + }, + Err(err) if err.kind() == std::io::ErrorKind::NotFound => Check { + name: "ATIF dir", + status: Status::Warn, + details: format!( + "{}: not present; runtime will create it on export", + path.display() + ), }, Err(err) => Check { name: "ATIF dir", @@ -267,7 +469,35 @@ async fn collect_observability(gateway: &GatewayConfig) -> Vec { }, }); - checks.push(match &gateway.openinference_endpoint { + checks.push(match &gateway.exporters.atof_dir { + None => Check { + name: "ATOF dir", + status: Status::Info, + details: "not configured".into(), + }, + Some(path) => match check_dir_writable(path) { + Ok(()) => Check { + name: "ATOF dir", + status: Status::Pass, + details: format!("{} (appears writable)", path.display()), + }, + Err(err) if err.kind() == std::io::ErrorKind::NotFound => Check { + name: "ATOF dir", + status: Status::Warn, + details: format!( + "{}: not present; runtime will create it on first event", + path.display() + ), + }, + Err(err) => Check { + name: "ATOF dir", + status: Status::Fail, + details: format!("{}: {err}", path.display()), + }, + }, + }); + + checks.push(match &gateway.exporters.openinference_endpoint { None => Check { name: "OpenInference endpoint", status: Status::Info, @@ -279,18 +509,20 @@ async fn collect_observability(gateway: &GatewayConfig) -> Vec { checks } -fn check_dir_writable(dir: &std::path::Path) -> 
Result<(), std::io::Error> { - use std::fs::OpenOptions; - std::fs::create_dir_all(dir)?; - // PID-suffixed name + create_new=true so we can never overwrite a real user file even if - // they happen to have a `.nemo-flow-write-probe` of their own. The probe is removed - // immediately; the file just witnesses that we have write access here. - let probe = dir.join(format!(".nemo-flow-write-probe-{}", std::process::id())); - OpenOptions::new() - .write(true) - .create_new(true) - .open(&probe)?; - std::fs::remove_file(&probe).ok(); +fn check_dir_writable(dir: &Path) -> Result<(), std::io::Error> { + let metadata = std::fs::metadata(dir)?; + if !metadata.is_dir() { + return Err(std::io::Error::new( + std::io::ErrorKind::InvalidInput, + "path is not a directory", + )); + } + if metadata.permissions().readonly() { + return Err(std::io::Error::new( + std::io::ErrorKind::PermissionDenied, + "directory is read-only", + )); + } Ok(()) } @@ -393,6 +625,10 @@ pub(crate) fn exit_code(report: &DoctorReport) -> u8 { .iter() .chain(report.completions.iter()) .any(|c| matches!(c.status, Status::Fail)) + || report + .agents + .iter() + .any(|agent| matches!(agent.status, Status::Fail)) || matches!(report.configuration.workspace.status, Status::Fail) || matches!(report.configuration.global.status, Status::Fail) || matches!(report.configuration.system.status, Status::Fail); @@ -408,6 +644,10 @@ fn report_has_warn(report: &DoctorReport) -> bool { .iter() .chain(report.completions.iter()) .any(|c| matches!(c.status, Status::Warn)) + || report + .agents + .iter() + .any(|agent| matches!(agent.status, Status::Warn)) || matches!(report.configuration.workspace.status, Status::Warn) || matches!(report.configuration.global.status, Status::Warn) || matches!(report.configuration.system.status, Status::Warn) @@ -421,6 +661,9 @@ pub(crate) fn format_human(report: &DoctorReport) -> String { let mut out = String::new(); out.push_str(&format!("\n NeMo Flow {}\n", report.binary_version)); 
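The new `check_dir_writable` trades the old write-probe file for a metadata inspection. A standalone sketch of the same logic; note that `Permissions::readonly` reflects only the write permission bits on Unix, which is consistent with doctor now reporting "appears writable" rather than a guarantee:

```rust
use std::io;
use std::path::Path;

// Metadata-based heuristic: a missing directory surfaces as NotFound (so a caller can
// downgrade it to a warning), a non-directory path or read-only directory is rejected.
// No probe file is created, so the check can never clobber user files.
fn check_dir_writable(dir: &Path) -> Result<(), io::Error> {
    let metadata = std::fs::metadata(dir)?;
    if !metadata.is_dir() {
        return Err(io::Error::new(io::ErrorKind::InvalidInput, "path is not a directory"));
    }
    if metadata.permissions().readonly() {
        return Err(io::Error::new(io::ErrorKind::PermissionDenied, "directory is read-only"));
    }
    Ok(())
}

fn main() {
    // The system temp dir is assumed writable on the host running this sketch.
    assert!(check_dir_writable(&std::env::temp_dir()).is_ok());
    let missing = Path::new("/no/such/dir/for-this-sketch");
    assert_eq!(check_dir_writable(missing).unwrap_err().kind(), io::ErrorKind::NotFound);
}
```

The NotFound arm is what lets the observability checks above report "not present; runtime will create it on export" as a warning instead of a failure.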
out.push_str(" ─────────────────────────────────────────────\n"); + if let Some(agent) = &report.target_agent { + out.push_str(&format!(" Target agent {agent}\n\n")); + } out.push_str(" Environment\n"); out.push_str(&format!( " OS {}\n", @@ -445,22 +688,35 @@ pub(crate) fn format_human(report: &DoctorReport) -> String { " System {}\n", format_layer(&report.configuration.system) )); + if !report.configuration.configured_agents.is_empty() { + out.push_str(&format!( + " Agents {}\n", + report.configuration.configured_agents.join(", ") + )); + } out.push('\n'); out.push_str(" Agents detected\n"); for agent in &report.agents { + let status = format_status(agent.status); match &agent.path { Some(path) => { let version = agent.version.as_deref().unwrap_or("(unknown version)"); out.push_str(&format!( - " {:<8} {}\n {}\n", + " {} {:<8} {}\n command {}\n path {}\n {}\n", + status, agent.name, version, - path.display() + agent.command, + path.display(), + agent.annotation )); } None => { - out.push_str(&format!(" {:<8} not on $PATH\n", agent.name)); + out.push_str(&format!( + " {} {:<8} not on $PATH\n command {}\n {}\n", + status, agent.name, agent.command, agent.annotation + )); } } } @@ -491,7 +747,17 @@ pub(crate) fn format_human(report: &DoctorReport) -> String { } fn format_layer(layer: &ConfigLayer) -> String { - format!("{} {}", layer.path.display(), layer.details) + let active = if layer.active { " (loaded)" } else { "" }; + format!("{} {}{}", layer.path.display(), layer.details, active) +} + +fn format_status(status: Status) -> &'static str { + match status { + Status::Pass => "✓", + Status::Warn => "!", + Status::Fail => "✗", + Status::Info => "·", + } } /// Renders the doctor report as machine-readable JSON. Versioned via `schema_version` so @@ -504,7 +770,8 @@ pub(crate) fn format_json(report: &DoctorReport) -> Result { /// Runs `agents` — a thin wrapper over `collect_agents` that emits only the agent list. 
Shares /// the same JSON schema as `doctor.agents` for consistency. pub(crate) async fn agents_report() -> Vec { - collect_agents().await + let resolved = resolve_server_config(&ServerArgs::default()).unwrap_or_default(); + collect_agents(None, &resolved).await } /// Renders the agents listing in human form. @@ -528,8 +795,12 @@ pub(crate) fn format_agents_human(agents: &[AgentInfo]) -> String { .map(|p| p.display().to_string()) .unwrap_or_default(); out.push_str(&format!( - " {:<8} {}\n {}\n", - agent.name, version, path + " {} {:<8} {}\n {}\n {}\n", + format_status(agent.status), + agent.name, + version, + path, + agent.annotation )); } } @@ -545,8 +816,11 @@ pub(crate) fn format_agents_json(agents: &[AgentInfo]) -> Result Result { - let report = collect_report().await?; +pub(crate) async fn run_doctor( + target_agent: Option, + json: bool, +) -> Result { + let report = collect_report(target_agent).await?; if json { print!("{}", format_json(&report)?); } else { diff --git a/crates/cli/src/launcher.rs b/crates/cli/src/launcher.rs index 8d7ab260..38b95c4f 100644 --- a/crates/cli/src/launcher.rs +++ b/crates/cli/src/launcher.rs @@ -71,6 +71,7 @@ pub(crate) async fn easy_path( openai_base_url: None, anthropic_base_url: None, atif_dir: None, + atof_dir: None, openinference_endpoint: None, session_metadata: None, plugin_config: None, @@ -510,11 +511,15 @@ impl PreparedRun { let mut lines: Vec = Vec::new(); lines.push(format!("NeMo Flow → {}", agent.as_arg())); lines.push(format!(" Gateway {gateway_url}")); - match &resolved.gateway.atif_dir { + match &resolved.gateway.exporters.atif_dir { Some(path) => lines.push(format!(" ATIF {}", path.display())), None => lines.push(" ATIF (disabled)".to_string()), } - match &resolved.gateway.openinference_endpoint { + match &resolved.gateway.exporters.atof_dir { + Some(path) => lines.push(format!(" ATOF {}", path.display())), + None => lines.push(" ATOF (disabled)".to_string()), + } + match 
&resolved.gateway.exporters.openinference_endpoint { Some(endpoint) => lines.push(format!(" OpenInference {endpoint}")), None => lines.push(" OpenInference (disabled)".to_string()), } @@ -557,10 +562,13 @@ impl PreparedRun { "anthropic_base_url = {}", resolved.gateway.anthropic_base_url ); - if let Some(path) = &resolved.gateway.atif_dir { + if let Some(path) = &resolved.gateway.exporters.atif_dir { println!("atif_dir = {}", path.display()); } - if let Some(endpoint) = &resolved.gateway.openinference_endpoint { + if let Some(path) = &resolved.gateway.exporters.atof_dir { + println!("atof_dir = {}", path.display()); + } + if let Some(endpoint) = &resolved.gateway.exporters.openinference_endpoint { println!("openinference_endpoint = {endpoint}"); } println!("argv = {}", self.argv.join(" ")); diff --git a/crates/cli/src/main.rs b/crates/cli/src/main.rs index 5914aa53..52500f04 100644 --- a/crates/cli/src/main.rs +++ b/crates/cli/src/main.rs @@ -67,7 +67,7 @@ async fn run() -> Result { } Ok(ExitCode::SUCCESS) } - Some(Command::Doctor(command)) => doctor::run_doctor(command.json).await, + Some(Command::Doctor(command)) => doctor::run_doctor(command.agent, command.json).await, Some(Command::Agents(command)) => doctor::run_agents(command.json).await, Some(Command::Completions(command)) => { if command.install { @@ -97,14 +97,15 @@ async fn run() -> Result { // OpenInference endpoint), they obviously want the long-running gateway daemon — // keep that path so existing scripts that explicitly invoke daemon mode stay // compatible. - // - Otherwise — no flags, no subcommand — interpret it as "I just typed nemo-flow, - // tell me what to do" and run the setup wizard. This matches the design intent - // ("bare invocation enters guided setup") instead of failing on a port bind that - // the user never asked for. + // - Otherwise — no flags, no subcommand — use the first-run path only when no config + // exists. 
Once configured, bare `nemo-flow` becomes a quick health check; explicit + // `nemo-flow config` remains the reconfiguration path. if cli.server.requested_daemon_mode() { let config = config::resolve_server_config(&cli.server)?; server::serve(config.gateway).await?; Ok(ExitCode::SUCCESS) + } else if config::any_config_file_exists() { + doctor::run_doctor(None, false).await } else { setup::run(None).await?; Ok(ExitCode::SUCCESS) diff --git a/crates/cli/src/session.rs b/crates/cli/src/session.rs index 45f36001..2d8e7efd 100644 --- a/crates/cli/src/session.rs +++ b/crates/cli/src/session.rs @@ -20,6 +20,7 @@ use nemo_flow::api::tool::{ ToolCallEndParams, ToolCallParams, ToolHandle, tool_call, tool_call_end, }; use nemo_flow::observability::atif::{AtifAgentInfo, AtifExporter}; +use nemo_flow::observability::atof::{AtofExporter, AtofExporterConfig}; use nemo_flow::observability::openinference::{OpenInferenceConfig, OpenInferenceSubscriber}; use serde_json::{Map, Value, json}; use tokio::sync::Mutex; @@ -99,6 +100,7 @@ struct Session { last_llm_owner: Option, config: SessionConfig, atif: Option, + atof: Option, openinference: Option, } @@ -356,6 +358,7 @@ impl Session { last_llm_owner: None, config, atif: None, + atof: None, openinference: None, } } @@ -396,7 +399,7 @@ impl Session { if self.agent_scope.is_none() { return Ok(()); } - if let (Some(exporter), Some(directory)) = (&self.atif, &self.config.atif_dir) { + if let (Some(exporter), Some(directory)) = (&self.atif, &self.config.exporters.atif_dir) { write_atif(directory, &self.session_id, exporter)?; } Ok(()) @@ -523,19 +526,50 @@ impl Session { Ok(()) } - // Installs configured exporters exactly once per session root. ATIF and OpenInference are - // scope-local subscribers so they disappear with the session and do not affect unrelated + // Installs configured exporters exactly once per session root. 
ATIF, ATOF, and OpenInference + // are scope-local subscribers so they disappear with the session and do not affect unrelated // concurrent agent runs. fn install_observers(&mut self, root: &ScopeHandle) -> Result<(), CliError> { self.install_atif_observer(root)?; + self.install_atof_observer(root)?; self.install_openinference_observer(root)?; Ok(()) } + // Registers the ATOF JSONL exporter once when a session has an ATOF directory configured. + // The file is named after the session id so concurrent sessions never share a writer. + // Append mode keeps existing per-session files intact across re-runs of the same session id + // (e.g., a resumed conversation). + fn install_atof_observer(&mut self, root: &ScopeHandle) -> Result<(), CliError> { + if self.atof.is_some() { + return Ok(()); + } + let Some(directory) = self.config.exporters.atof_dir.clone() else { + return Ok(()); + }; + // Ensure the directory exists; AtofExporter opens the file via OpenOptions which won't + // create parent dirs. Failure is non-fatal — surfaced as a CliError so the caller can + // decide to continue without ATOF rather than aborting the whole session. + std::fs::create_dir_all(&directory).map_err(|err| { + CliError::Config(format!( + "could not create ATOF directory {}: {err}", + directory.display() + )) + })?; + let config = AtofExporterConfig::default() + .with_output_directory(directory) + .with_filename(format!("{}.jsonl", self.session_id)); + let exporter = AtofExporter::new(config) + .map_err(|err| CliError::Config(format!("could not open ATOF file: {err}")))?; + scope_register_subscriber(&root.uuid, "gateway-atof", exporter.subscriber())?; + self.atof = Some(exporter); + Ok(()) + } + // Registers the ATIF exporter once when a session has ATIF output configured. The exporter keeps // the session agent metadata so downstream trajectory files can be attributed to this run. 
fn install_atif_observer(&mut self, root: &ScopeHandle) -> Result<(), CliError> { - if self.atif.is_some() || self.config.atif_dir.is_none() { + if self.atif.is_some() || self.config.exporters.atif_dir.is_none() { return Ok(()); } let exporter = AtifExporter::new( @@ -559,7 +593,7 @@ impl Session { if self.openinference.is_some() { return Ok(()); } - let Some(endpoint) = &self.config.openinference_endpoint else { + let Some(endpoint) = &self.config.exporters.openinference_endpoint else { return Ok(()); }; let subscriber = OpenInferenceSubscriber::new( @@ -912,9 +946,19 @@ impl Session { subscriber.force_flush()?; subscriber.shutdown()?; } - if let (Some(exporter), Some(directory)) = (&self.atif, &self.config.atif_dir) { + if let (Some(exporter), Some(directory)) = (&self.atif, &self.config.exporters.atif_dir) { write_atif(directory, &self.session_id, exporter)?; } + // ATOF writes per-event JSONL as events arrive; flush + shutdown here just ensure the + // BufWriter is drained and the file is closed cleanly before the session record is dropped. + if let Some(exporter) = &self.atof { + exporter + .force_flush() + .map_err(|err| CliError::Config(format!("ATOF flush failed: {err}")))?; + exporter + .shutdown() + .map_err(|err| CliError::Config(format!("ATOF shutdown failed: {err}")))?; + } Ok(()) } diff --git a/crates/cli/src/setup.rs b/crates/cli/src/setup.rs index ef1dfa43..73fe6453 100644 --- a/crates/cli/src/setup.rs +++ b/crates/cli/src/setup.rs @@ -43,8 +43,10 @@ impl ConfigScope { /// One of the built-in observability backends offered in setup. #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) enum ObservabilityBackend { - /// Local ATIF trajectory files. + /// Local ATIF trajectory files (one JSON file per session). Atif, + /// Local ATOF raw-event JSONL streams (one line per event, raw ATOF shape). + Atof, /// OpenInference spans streamed to an HTTP endpoint (Phoenix, Arize, OTLP-compatible). 
     OpenInference,
 }
 
@@ -53,6 +55,7 @@ impl ObservabilityBackend {
     fn label(self) -> &'static str {
         match self {
             Self::Atif => "ATIF trajectory files ./atif/ (recommended)",
+            Self::Atof => "ATOF event JSONL stream ./atof/ (raw events)",
             Self::OpenInference => {
                 "OpenInference spans (Phoenix / Arize / OTLP)"
             }
@@ -112,28 +115,32 @@ pub(crate) fn detect_installed_agents_in(path_var: Option<&std::ffi::OsStr>) ->
 /// Builds the TOML document that represents the setup's answers. Pure and testable.
 ///
-/// The shape mirrors the schema landed in Phase 1: `[observability]`, `[export.openinference]`,
-/// `[agents.<name>]`. Sections are only emitted when the user opted into the corresponding
-/// behavior so the resulting file stays minimal.
+/// The shape mirrors the runtime model: exporter sinks live under `[exporters]`, agents under
+/// `[agents.<name>]`, and upstream overrides under `[upstream]`. Sections are only emitted when
+/// the user opted into the corresponding behavior so the resulting file stays minimal.
 pub(crate) fn build_config(answers: &SetupAnswers) -> DocumentMut {
     let mut doc = DocumentMut::new();
-    if answers.backends.contains(&ObservabilityBackend::Atif) {
-        let mut observability = Table::new();
-        observability["atif_dir"] = value("./atif");
-        doc["observability"] = Item::Table(observability);
-    }
-
-    if answers
+    // Build the exporter table once so selecting multiple backends produces a single section with
+    // all enabled sinks, not separate legacy observability/export blocks.
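For reference, here is the kind of file the consolidated table produces when a user selects all three backends. The shape follows `build_config`; the endpoint value is only illustrative:

```toml
# Single consolidated exporter section; the legacy split
# [observability] / [export.openinference] blocks are no longer written.
[exporters]
atif_dir = "./atif"
atof_dir = "./atof"
openinference_endpoint = "http://localhost:6006/v1/traces"  # illustrative endpoint
```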
+    let want_atif = answers.backends.contains(&ObservabilityBackend::Atif);
+    let want_atof = answers.backends.contains(&ObservabilityBackend::Atof);
+    let want_openinference = answers
         .backends
         .contains(&ObservabilityBackend::OpenInference)
-        && let Some(endpoint) = answers.openinference_endpoint.as_deref()
-    {
-        let mut export = Table::new();
-        let mut openinference = Table::new();
-        openinference["endpoint"] = value(endpoint);
-        export.insert("openinference", Item::Table(openinference));
-        doc["export"] = Item::Table(export);
+        && answers.openinference_endpoint.is_some();
+    if want_atif || want_atof || want_openinference {
+        let mut exporters = Table::new();
+        if want_atif {
+            exporters["atif_dir"] = value("./atif");
+        }
+        if want_atof {
+            exporters["atof_dir"] = value("./atof");
+        }
+        if let Some(endpoint) = answers.openinference_endpoint.as_deref() {
+            exporters["openinference_endpoint"] = value(endpoint);
+        }
+        doc["exporters"] = Item::Table(exporters);
     }
 
     if !answers.agents.is_empty() {
@@ -169,8 +176,8 @@ pub(crate) fn build_config(answers: &SetupAnswers) -> DocumentMut {
 /// Writes the setup's TOML document to the scope-appropriate path(s).
 ///
 /// When `merge_scope` is `Some(agent)`, an existing `config.toml` at the target path is parsed
-/// and only the sections owned by THIS wizard run are replaced: `[observability]`,
-/// `[export.openinference]`, `[plugins]`, and the single `[agents.<name>]` block. Other
+/// and only the sections owned by THIS wizard run are replaced: `[exporters]`,
+/// legacy `[observability]` / `[export]`, `[plugins]`, and the single `[agents.<name>]` block. Other
 /// `[agents.*]` blocks are preserved. When `merge_scope` is `None`, the file is overwritten
 /// outright with the wizard's full output (the user explicitly chose which agents to include).
 ///
@@ -236,6 +243,7 @@ fn write_or_merge(
     // omits a section, the previous override is removed too. Otherwise accepting the default
     // (e.g. dropping a custom `openai_base_url`) could not actually revert the override — the
     // old value would silently survive.
+    replace_section(&mut existing, doc, "exporters");
     replace_section(&mut existing, doc, "observability");
     replace_section(&mut existing, doc, "export");
     replace_section(&mut existing, doc, "upstream");
@@ -480,6 +488,7 @@ struct Defaults {
     scope: Option<ConfigScope>,
     agents: Vec<String>,
     atif_enabled: bool,
+    atof_enabled: bool,
     openinference_endpoint: Option<String>,
     openai_base_url: Option<String>,
 }
@@ -489,6 +498,7 @@ impl Defaults {
         self.scope.is_some()
             || !self.agents.is_empty()
             || self.atif_enabled
+            || self.atof_enabled
             || self.openinference_endpoint.is_some()
             || self.openai_base_url.is_some()
     }
@@ -525,21 +535,31 @@ fn read_existing_defaults() -> Option<Defaults> {
         (false, false) => None,
     };
 
+    let exporters = doc.get("exporters").and_then(|i| i.as_table());
+    let legacy_observability = doc.get("observability").and_then(|i| i.as_table());
+    let legacy_export = doc.get("export").and_then(|i| i.as_table());
+
     Some(Defaults {
         scope,
         agents: read_agents_from_doc(&doc),
-        atif_enabled: doc
-            .get("observability")
-            .and_then(|i| i.as_table())
-            .and_then(|t| t.get("atif_dir"))
-            .is_some(),
-        openinference_endpoint: doc
-            .get("export")
-            .and_then(|i| i.as_table())
-            .and_then(|t| t.get("openinference"))
-            .and_then(|i| i.as_table())
-            .and_then(|t| t.get("endpoint"))
+        atif_enabled: exporters.and_then(|t| t.get("atif_dir")).is_some()
+            || legacy_observability
+                .and_then(|t| t.get("atif_dir"))
+                .is_some(),
+        atof_enabled: exporters.and_then(|t| t.get("atof_dir")).is_some()
+            || legacy_observability
+                .and_then(|t| t.get("atof_dir"))
+                .is_some(),
+        openinference_endpoint: exporters
+            .and_then(|t| t.get("openinference_endpoint"))
             .and_then(|i| i.as_str())
+            .or_else(|| {
+                legacy_export
+                    .and_then(|t| t.get("openinference"))
+                    .and_then(|i| i.as_table())
+                    .and_then(|t| t.get("endpoint"))
+                    .and_then(|i| i.as_str())
+            })
             .map(str::to_string),
         openai_base_url: doc
             .get("upstream")
@@ -695,18 +715,21 @@ fn ask_backends(
 ) -> Result<(Vec<ObservabilityBackend>, Option<String>), CliError> {
     let options = [
         ObservabilityBackend::Atif,
+        ObservabilityBackend::Atof,
         ObservabilityBackend::OpenInference,
     ];
     let labels: Vec<&str> = options.iter().map(|b| b.label()).collect();
 
     // Pre-check from existing config when present. On first run, falls back to ATIF on (zero
-    // infra) and OpenInference off (needs an endpoint running).
+    // infra, trajectory replay is the common case), ATOF off (raw event noise — users opt in),
+    // and OpenInference off (needs an endpoint running).
     let defaults = if existing.has_any() {
         [
             existing.atif_enabled,
+            existing.atof_enabled,
             existing.openinference_endpoint.is_some(),
         ]
     } else {
-        [true, false]
+        [true, false, false]
     };
     let selected_idx = MultiSelect::with_theme(theme)
         .with_prompt("Observability backends?")
diff --git a/crates/cli/tests/cli_tests.rs b/crates/cli/tests/cli_tests.rs
index 09dc21fa..8ff8761d 100644
--- a/crates/cli/tests/cli_tests.rs
+++ b/crates/cli/tests/cli_tests.rs
@@ -65,6 +65,59 @@ fn cli_easy_path_invokes_setup_when_no_config_found() {
     );
 }
 
+#[test]
+fn cli_bare_invocation_invokes_setup_when_no_config_found() {
+    let temp = tempfile::tempdir().unwrap();
+    let xdg = temp.path().join("xdg");
+    std::fs::create_dir_all(&xdg).unwrap();
+    let cwd = temp.path().join("workdir");
+    std::fs::create_dir_all(&cwd).unwrap();
+
+    let output = Command::new(gateway_bin())
+        .current_dir(&cwd)
+        .env("XDG_CONFIG_HOME", &xdg)
+        .env("HOME", temp.path())
+        .output()
+        .unwrap();
+
+    assert!(
+        !output.status.success(),
+        "bare invocation should enter non-TTY setup when no config exists"
+    );
+    let stderr = String::from_utf8_lossy(&output.stderr);
+    assert!(
+        stderr.contains("setup requires a TTY"),
+        "expected non-TTY setup error in stderr, got:\n{stderr}"
+    );
+}
+
+#[test]
+fn cli_bare_invocation_runs_doctor_when_config_exists() {
+    let temp = tempfile::tempdir().unwrap();
+    let xdg = temp.path().join("xdg");
+    std::fs::create_dir_all(&xdg).unwrap();
+    let cwd = temp.path().join("workdir");
+    std::fs::create_dir_all(cwd.join(".nemo-flow")).unwrap();
+    std::fs::write(cwd.join(".nemo-flow/config.toml"), "[observability]\n").unwrap();
+
+    let output = Command::new(gateway_bin())
+        .current_dir(&cwd)
+        .env("XDG_CONFIG_HOME", &xdg)
+        .env("HOME", temp.path())
+        .output()
+        .unwrap();
+
+    assert!(
+        output.status.success(),
+        "bare invocation should run doctor when config exists: stderr={}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8_lossy(&output.stdout);
+    assert!(stdout.contains("Environment"));
+    assert!(stdout.contains("Configuration"));
+    assert!(stdout.contains("Agents detected"));
+}
+
 #[test]
 fn cli_run_dry_run_resolves_config_and_command() {
     let temp = tempfile::tempdir().unwrap();
diff --git a/crates/cli/tests/coverage/banner_tests.rs b/crates/cli/tests/coverage/banner_tests.rs
index eaf78633..83931c3b 100644
--- a/crates/cli/tests/coverage/banner_tests.rs
+++ b/crates/cli/tests/coverage/banner_tests.rs
@@ -5,7 +5,7 @@ use super::*;
 
 #[test]
 fn render_frame_settled_contains_figlet_glyphs() {
-    let frame = render_frame(None, false);
+    let frame = render_frame(false);
     // ANSI Shadow figlet uses filled blocks and box-drawing corners.
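The two bare-invocation tests pin down a simple dispatch rule: with no config anywhere the CLI enters the setup wizard (which requires a TTY), and with a config present it runs doctor. A minimal stand-in for that decision, with hypothetical paths and names (the crate's actual resolution order may differ):

```rust
use std::path::Path;

#[derive(Debug, PartialEq)]
enum BareAction {
    Setup,
    Doctor,
}

// Bare invocation: prefer doctor when any config layer exists, otherwise run setup.
fn bare_action(workspace: &Path, xdg_config_home: &Path) -> BareAction {
    let has_config = workspace.join(".nemo-flow/config.toml").exists()
        || xdg_config_home.join("nemo-flow/config.toml").exists();
    if has_config {
        BareAction::Doctor
    } else {
        BareAction::Setup
    }
}
```

The tests steer this dispatch purely through the filesystem (`XDG_CONFIG_HOME`, a `.nemo-flow/config.toml` in the working directory), which keeps the decision logic trivially testable without a TTY.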
assert!(frame.contains('█'), "frame missing figlet block glyph"); assert!( @@ -16,7 +16,7 @@ fn render_frame_settled_contains_figlet_glyphs() { #[test] fn render_frame_plain_mode_has_no_ansi_escapes() { - let frame = render_frame(None, false); + let frame = render_frame(false); assert!( !frame.contains('\x1b'), "plain mode should emit no ANSI escapes" @@ -25,65 +25,23 @@ fn render_frame_plain_mode_has_no_ansi_escapes() { #[test] fn render_frame_color_mode_emits_nvidia_green() { - let frame = render_frame(None, true); + let frame = render_frame(true); assert!(frame.contains("\x1b[38;5;112m")); assert!(frame.contains("\x1b[0m")); } #[test] -fn render_frame_tracer_overlay_inserts_dot_at_position() { - // Pick a position on the top rail (row 0) that's empty in the static art. - let frame_with = render_frame(Some((0, 14)), true); - let frame_without = render_frame(None, true); +fn docked_frame_has_no_cursor_control_sequences() { + let frame = render_docked_frame(true); assert!( - frame_with.contains('●'), - "tracer should render a `●` head when overlay is active" + !frame.contains("\x1b[?25l") && !frame.contains("\x1b[?25h") && !frame.contains("\x1b7"), + "static banner should not emit animation cursor control sequences" ); - assert!( - !frame_without.contains('●'), - "settled frame (no tracer) should not include the dot glyph" - ); -} - -#[test] -fn render_frame_tracer_plain_mode_uses_ascii_star() { - let frame = render_frame(Some((0, 14)), false); - assert!( - frame.contains('*'), - "plain mode tracer head should render as `*` (ASCII star)" - ); - assert!( - !frame.contains('●'), - "plain mode should not emit Unicode dot" - ); -} - -#[test] -fn tracer_position_starts_on_top_rail_and_ends_on_bottom_rail() { - let (r0, _c0) = tracer_position(0).expect("frame 0 should have a position"); - assert_eq!(r0, 0, "tracer starts on the top rail"); - - let (r_last, c_last) = - tracer_position(TRACER_FRAMES - 1).expect("last animated frame should have a position"); - assert!( - 
r_last >= 6, - "tracer should descend to the bottom rail by the last frame" - ); - assert!( - c_last >= 80, - "tracer should travel close to the right edge by the last frame" - ); -} - -#[test] -fn tracer_position_is_none_after_animation_ends() { - assert!(tracer_position(TRACER_FRAMES).is_none()); - assert!(tracer_position(TRACER_FRAMES + 100).is_none()); } #[test] fn frame_is_wrapped_with_rounded_border() { - let frame = render_frame(None, false); + let frame = render_frame(false); // Four corner glyphs and the side bars must appear. assert!(frame.contains('╭'), "missing top-left corner"); assert!(frame.contains('╮'), "missing top-right corner"); diff --git a/crates/cli/tests/coverage/config_tests.rs b/crates/cli/tests/coverage/config_tests.rs index e31208ea..3339e1a7 100644 --- a/crates/cli/tests/coverage/config_tests.rs +++ b/crates/cli/tests/coverage/config_tests.rs @@ -10,8 +10,11 @@ fn config() -> GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://openai".into(), anthropic_base_url: "http://anthropic".into(), - atif_dir: Some(PathBuf::from("default-atif")), - openinference_endpoint: Some("http://default-otel".into()), + exporters: ExportersConfig { + atif_dir: Some(PathBuf::from("default-atif")), + openinference_endpoint: Some("http://default-otel".into()), + ..Default::default() + }, metadata: None, plugin_config: None, } @@ -47,9 +50,12 @@ fn session_config_prefers_headers_and_parses_json() { let session = config().session_config_from_headers(&headers); - assert_eq!(session.atif_dir, Some(PathBuf::from("header-atif"))); assert_eq!( - session.openinference_endpoint.as_deref(), + session.exporters.atif_dir, + Some(PathBuf::from("header-atif")) + ); + assert_eq!( + session.exporters.openinference_endpoint.as_deref(), Some("http://header-otel") ); assert_eq!(session.profile.as_deref(), Some("profile-a")); @@ -69,9 +75,12 @@ fn session_config_uses_defaults_and_ignores_bad_json() { let session = 
config().session_config_from_headers(&headers); - assert_eq!(session.atif_dir, Some(PathBuf::from("default-atif"))); assert_eq!( - session.openinference_endpoint.as_deref(), + session.exporters.atif_dir, + Some(PathBuf::from("default-atif")) + ); + assert_eq!( + session.exporters.openinference_endpoint.as_deref(), Some("http://default-otel") ); assert_eq!(session.metadata, None); @@ -115,12 +124,13 @@ fn explicit_toml_config_maps_supported_sections() { openai_base_url = "http://openai" anthropic_base_url = "http://anthropic" -[observability] +[exporters] atif_dir = "atif" -metadata = { team = "obs" } +atof_dir = "atof" +openinference_endpoint = "http://otel" -[export.openinference] -endpoint = "http://otel" +[observability] +metadata = { team = "obs" } [plugins] config = { components = [] } @@ -146,6 +156,9 @@ command = "hermes --yolo chat" openai_base_url: None, anthropic_base_url: None, atif_dir: None, + + atof_dir: None, + openinference_endpoint: None, session_metadata: None, plugin_config: None, @@ -159,9 +172,16 @@ command = "hermes --yolo chat" assert_eq!(resolved.gateway.bind.to_string(), "127.0.0.1:0"); assert_eq!(resolved.gateway.openai_base_url, "http://openai"); assert_eq!(resolved.gateway.anthropic_base_url, "http://anthropic"); - assert_eq!(resolved.gateway.atif_dir, Some(PathBuf::from("atif"))); assert_eq!( - resolved.gateway.openinference_endpoint.as_deref(), + resolved.gateway.exporters.atif_dir, + Some(PathBuf::from("atif")) + ); + assert_eq!( + resolved.gateway.exporters.atof_dir, + Some(PathBuf::from("atof")) + ); + assert_eq!( + resolved.gateway.exporters.openinference_endpoint.as_deref(), Some("http://otel") ); assert_eq!(resolved.gateway.metadata, Some(json!({ "team": "obs" }))); @@ -198,6 +218,7 @@ metadata = { team = "file" } openai_base_url: Some("http://cli-openai".into()), anthropic_base_url: None, atif_dir: Some(PathBuf::from("cli-atif")), + atof_dir: None, openinference_endpoint: None, session_metadata: Some(r#"{"team":"cli"}"#.into()), 
plugin_config: None, @@ -209,7 +230,10 @@ metadata = { team = "file" } let resolved = resolve_run_config(&command, None).unwrap(); assert_eq!(resolved.gateway.openai_base_url, "http://cli-openai"); - assert_eq!(resolved.gateway.atif_dir, Some(PathBuf::from("cli-atif"))); + assert_eq!( + resolved.gateway.exporters.atif_dir, + Some(PathBuf::from("cli-atif")) + ); assert_eq!(resolved.gateway.metadata, Some(json!({ "team": "cli" }))); } @@ -236,6 +260,9 @@ openai_base_url = "http://file-openai" openai_base_url: None, anthropic_base_url: None, atif_dir: None, + + atof_dir: None, + openinference_endpoint: None, session_metadata: None, plugin_config: None, @@ -257,6 +284,7 @@ fn server_resolution_applies_all_server_overrides() { openai_base_url: Some("http://cli-openai".into()), anthropic_base_url: Some("http://cli-anthropic".into()), atif_dir: Some(PathBuf::from("cli-atif")), + atof_dir: None, openinference_endpoint: Some("http://cli-otel".into()), }; @@ -265,9 +293,12 @@ fn server_resolution_applies_all_server_overrides() { assert_eq!(resolved.gateway.bind.to_string(), "127.0.0.1:0"); assert_eq!(resolved.gateway.openai_base_url, "http://cli-openai"); assert_eq!(resolved.gateway.anthropic_base_url, "http://cli-anthropic"); - assert_eq!(resolved.gateway.atif_dir, Some(PathBuf::from("cli-atif"))); assert_eq!( - resolved.gateway.openinference_endpoint.as_deref(), + resolved.gateway.exporters.atif_dir, + Some(PathBuf::from("cli-atif")) + ); + assert_eq!( + resolved.gateway.exporters.openinference_endpoint.as_deref(), Some("http://cli-otel") ); } @@ -280,6 +311,7 @@ fn run_resolution_applies_all_run_overrides() { openai_base_url: Some("http://run-openai".into()), anthropic_base_url: Some("http://run-anthropic".into()), atif_dir: Some(PathBuf::from("run-atif")), + atof_dir: None, openinference_endpoint: Some("http://run-otel".into()), session_metadata: Some(r#"{"team":"run"}"#.into()), plugin_config: Some(r#"{"components":["x"]}"#.into()), @@ -292,9 +324,12 @@ fn 
run_resolution_applies_all_run_overrides() { assert_eq!(resolved.gateway.openai_base_url, "http://run-openai"); assert_eq!(resolved.gateway.anthropic_base_url, "http://run-anthropic"); - assert_eq!(resolved.gateway.atif_dir, Some(PathBuf::from("run-atif"))); assert_eq!( - resolved.gateway.openinference_endpoint.as_deref(), + resolved.gateway.exporters.atif_dir, + Some(PathBuf::from("run-atif")) + ); + assert_eq!( + resolved.gateway.exporters.openinference_endpoint.as_deref(), Some("http://run-otel") ); assert_eq!(resolved.gateway.metadata, Some(json!({ "team": "run" }))); diff --git a/crates/cli/tests/coverage/doctor_tests.rs b/crates/cli/tests/coverage/doctor_tests.rs index d0effda8..a7c93635 100644 --- a/crates/cli/tests/coverage/doctor_tests.rs +++ b/crates/cli/tests/coverage/doctor_tests.rs @@ -2,12 +2,14 @@ // SPDX-License-Identifier: Apache-2.0 use super::*; +use crate::config::ExportersConfig; use std::path::PathBuf; fn empty_report() -> DoctorReport { DoctorReport { schema_version: 1, binary_version: "0.0.0-test", + target_agent: None, environment: EnvironmentInfo { os: "macos 25.3.0".into(), arch: "aarch64", @@ -17,19 +19,23 @@ fn empty_report() -> DoctorReport { workspace: ConfigLayer { path: PathBuf::from("/x/.nemo-flow/config.toml"), status: Status::Info, + active: false, details: "not present".into(), }, global: ConfigLayer { path: PathBuf::from("/x/.config/nemo-flow/config.toml"), status: Status::Info, + active: false, details: "not present".into(), }, system: ConfigLayer { path: PathBuf::from("/etc/nemo-flow/config.toml"), status: Status::Info, + active: false, details: "not present".into(), }, default_agent: None, + configured_agents: vec![], }, agents: vec![], observability: vec![], @@ -73,6 +79,21 @@ fn exit_code_fails_when_workspace_config_is_invalid() { assert_eq!(exit_code(&report), 1); } +#[test] +fn exit_code_fails_when_agent_readiness_fails() { + let mut report = empty_report(); + report.agents.push(AgentInfo { + name: "codex", + status: 
Status::Fail, + configured: true, + command: "codex".into(), + path: None, + version: None, + annotation: "configured command not found on $PATH".into(), + }); + assert_eq!(exit_code(&report), 1); +} + #[test] fn format_human_emits_fixed_section_order() { let report = empty_report(); @@ -103,6 +124,38 @@ fn format_human_reports_all_checks_passed_on_clean_report() { assert!(!rendered.contains("warnings")); } +#[test] +fn format_human_uses_symbols_for_agent_statuses() { + let mut report = empty_report(); + report.agents = vec![ + AgentInfo { + name: "claude", + status: Status::Pass, + configured: true, + command: "claude".into(), + path: Some(PathBuf::from("/bin/claude")), + version: Some("1.0.0".into()), + annotation: "hooks: injected during run".into(), + }, + AgentInfo { + name: "codex", + status: Status::Info, + configured: false, + command: "codex".into(), + path: None, + version: None, + annotation: "not configured".into(), + }, + ]; + + let rendered = format_human(&report); + + assert!(rendered.contains(" ✓ claude")); + assert!(rendered.contains(" · codex")); + assert!(!rendered.contains(" pass ")); + assert!(!rendered.contains(" info ")); +} + #[test] fn format_human_reports_failure_summary_when_anything_failed() { let mut report = empty_report(); @@ -140,24 +193,61 @@ fn format_json_is_stable_and_versioned() { let parsed: serde_json::Value = serde_json::from_str(&json).unwrap(); // schema_version pins the wire format. Bump only on breaking renames/removals. 
assert_eq!(parsed["schema_version"], 1); + assert!(parsed["target_agent"].is_null()); assert!(parsed["environment"]["os"].is_string()); assert!(parsed["agents"].is_array()); } +#[test] +fn check_dir_writable_does_not_create_missing_dir() { + let temp = tempfile::tempdir().unwrap(); + let missing = temp.path().join("missing-atif"); + + assert!(check_dir_writable(&missing).is_err()); + assert!( + !missing.exists(), + "doctor should not create missing ATIF directories while probing" + ); +} + +#[tokio::test] +async fn collect_observability_warns_for_missing_atif_dir_without_creating_it() { + let temp = tempfile::tempdir().unwrap(); + let missing = temp.path().join("missing-atif"); + let gateway = GatewayConfig { + exporters: ExportersConfig { + atif_dir: Some(missing.clone()), + ..Default::default() + }, + ..GatewayConfig::default() + }; + + let checks = collect_observability(&gateway).await; + + assert_eq!(checks[0].status, Status::Warn); + assert!(!missing.exists()); +} + #[test] fn format_agents_human_lists_supported_and_separates_detected() { let agents = vec![ AgentInfo { name: "claude", + status: Status::Pass, + configured: true, + command: "claude".into(), path: Some(PathBuf::from("/opt/homebrew/bin/claude")), version: Some("2.1.4".into()), - annotation: String::new(), + annotation: "hooks: injected during run".into(), }, AgentInfo { name: "codex", + status: Status::Info, + configured: false, + command: "codex".into(), path: None, version: None, - annotation: String::new(), + annotation: "not configured".into(), }, ]; let rendered = format_agents_human(&agents); @@ -176,14 +266,20 @@ fn format_agents_human_lists_supported_and_separates_detected() { fn format_agents_json_matches_doctor_agents_shape() { let agents = vec![AgentInfo { name: "claude", + status: Status::Pass, + configured: true, + command: "claude".into(), path: Some(PathBuf::from("/opt/homebrew/bin/claude")), version: Some("2.1.4".into()), - annotation: String::new(), + annotation: "hooks: injected 
during run".into(), }]; let json = format_agents_json(&agents).unwrap(); let parsed: serde_json::Value = serde_json::from_str(&json).unwrap(); assert!(parsed.is_array()); assert_eq!(parsed[0]["name"], "claude"); + assert_eq!(parsed[0]["status"], "pass"); + assert_eq!(parsed[0]["configured"], true); + assert_eq!(parsed[0]["command"], "claude"); assert_eq!(parsed[0]["version"], "2.1.4"); assert_eq!(parsed[0]["path"], "/opt/homebrew/bin/claude"); } diff --git a/crates/cli/tests/coverage/gateway_tests.rs b/crates/cli/tests/coverage/gateway_tests.rs index 3b15abe9..abcf2969 100644 --- a/crates/cli/tests/coverage/gateway_tests.rs +++ b/crates/cli/tests/coverage/gateway_tests.rs @@ -2,7 +2,7 @@ // SPDX-License-Identifier: Apache-2.0 use super::*; -use crate::config::GatewayConfig; +use crate::config::{ExportersConfig, GatewayConfig}; use crate::server::AppState; use crate::session::SessionManager; use axum::body::Body; @@ -82,8 +82,7 @@ fn provider_routes_preserve_path_query_and_choose_upstream() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://openai/".into(), anthropic_base_url: "http://anthropic/".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ -348,8 +347,7 @@ async fn passthrough_rejects_unsupported_provider_path_directly() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://openai".into(), anthropic_base_url: "http://anthropic".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ -375,8 +373,7 @@ async fn models_rejects_non_get_requests_directly() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://openai".into(), anthropic_base_url: "http://anthropic".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; diff --git a/crates/cli/tests/coverage/launcher_tests.rs 
b/crates/cli/tests/coverage/launcher_tests.rs index 861b96bb..14358b7a 100644 --- a/crates/cli/tests/coverage/launcher_tests.rs +++ b/crates/cli/tests/coverage/launcher_tests.rs @@ -18,6 +18,9 @@ fn infers_agent_from_command_or_uses_override() { openai_base_url: None, anthropic_base_url: None, atif_dir: None, + + atof_dir: None, + openinference_endpoint: None, session_metadata: None, plugin_config: None, @@ -53,6 +56,9 @@ fn uses_configured_command_when_no_argv_is_supplied() { openai_base_url: None, anthropic_base_url: None, atif_dir: None, + + atof_dir: None, + openinference_endpoint: None, session_metadata: None, plugin_config: None, @@ -82,6 +88,9 @@ fn uses_configured_hermes_command_when_no_argv_is_supplied() { openai_base_url: None, anthropic_base_url: None, atif_dir: None, + + atof_dir: None, + openinference_endpoint: None, session_metadata: None, plugin_config: None, @@ -104,6 +113,9 @@ fn inference_failure_has_actionable_message() { openai_base_url: None, anthropic_base_url: None, atif_dir: None, + + atof_dir: None, + openinference_endpoint: None, session_metadata: None, plugin_config: None, @@ -131,6 +143,9 @@ fn missing_command_without_agent_errors() { openai_base_url: None, anthropic_base_url: None, atif_dir: None, + + atof_dir: None, + openinference_endpoint: None, session_metadata: None, plugin_config: None, @@ -156,6 +171,9 @@ fn agent_without_configured_command_falls_back_to_default_binary() { openai_base_url: None, anthropic_base_url: None, atif_dir: None, + + atof_dir: None, + openinference_endpoint: None, session_metadata: None, plugin_config: None, @@ -179,6 +197,9 @@ fn agent_with_passthrough_args_appends_to_configured_command() { openai_base_url: None, anthropic_base_url: None, atif_dir: None, + + atof_dir: None, + openinference_endpoint: None, session_metadata: None, plugin_config: None, @@ -565,6 +586,9 @@ async fn run_starts_gateway_injects_env_and_returns_agent_exit_code() { openai_base_url: None, anthropic_base_url: None, atif_dir: None, + 
+ atof_dir: None, + openinference_endpoint: None, session_metadata: None, plugin_config: None, @@ -608,6 +632,9 @@ async fn dry_run_does_not_spawn_agent() { openai_base_url: None, anthropic_base_url: None, atif_dir: None, + + atof_dir: None, + openinference_endpoint: None, session_metadata: None, plugin_config: None, diff --git a/crates/cli/tests/coverage/server_tests.rs b/crates/cli/tests/coverage/server_tests.rs index ad68064f..2aef6bf7 100644 --- a/crates/cli/tests/coverage/server_tests.rs +++ b/crates/cli/tests/coverage/server_tests.rs @@ -13,6 +13,7 @@ use tokio::task::JoinHandle; use tower::ServiceExt; use super::*; +use crate::config::ExportersConfig; use crate::error::CliError; struct TestServer { @@ -37,8 +38,7 @@ fn test_config() -> GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, } diff --git a/crates/cli/tests/coverage/session_tests.rs b/crates/cli/tests/coverage/session_tests.rs index 3f218980..c079c01f 100644 --- a/crates/cli/tests/coverage/session_tests.rs +++ b/crates/cli/tests/coverage/session_tests.rs @@ -5,6 +5,7 @@ use axum::http::HeaderMap; use serde_json::json; use super::*; +use crate::config::ExportersConfig; use crate::model::{LlmEvent, LlmHintEvent, SessionEvent, ToolEvent}; #[tokio::test] @@ -13,8 +14,7 @@ async fn nests_agent_subagent_and_tool_lifecycle() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ -89,8 +89,7 @@ async fn writes_atif_on_session_end_from_header_config() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - 
atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ -152,8 +151,10 @@ async fn duplicate_agent_end_does_not_overwrite_atif_with_empty_session() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: Some(temp.path().to_path_buf()), - openinference_endpoint: None, + exporters: ExportersConfig { + atif_dir: Some(temp.path().to_path_buf()), + ..Default::default() + }, metadata: None, plugin_config: None, }; @@ -228,8 +229,7 @@ async fn writes_hermes_api_hook_usage_to_atif_metrics() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ -307,8 +307,7 @@ async fn handles_out_of_order_subagent_and_tool_end_events() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ -359,7 +358,7 @@ async fn handles_out_of_order_subagent_and_tool_end_events() { async fn terminal_retry_for_unknown_session_is_ignored() { let temp = tempfile::tempdir().unwrap(); let mut config = session_test_config(); - config.atif_dir = Some(temp.path().to_path_buf()); + config.exporters.atif_dir = Some(temp.path().to_path_buf()); let manager = SessionManager::new(config); manager @@ -386,8 +385,7 @@ async fn out_of_order_started_subagent_end_does_not_leak_scope() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ 
-458,8 +456,7 @@ async fn agent_end_closes_nested_active_subagents_lifo() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ -514,8 +511,7 @@ async fn llm_lifecycle_starts_implicit_gateway_session() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ -558,7 +554,7 @@ async fn llm_lifecycle_starts_implicit_gateway_session() { async fn agent_end_closes_in_flight_gateway_llm() { let temp = tempfile::tempdir().unwrap(); let mut config = session_test_config(); - config.atif_dir = Some(temp.path().to_path_buf()); + config.exporters.atif_dir = Some(temp.path().to_path_buf()); let manager = SessionManager::new(config); let _active = manager .start_llm( @@ -607,8 +603,7 @@ async fn llm_lifecycle_uses_single_active_hook_session_when_header_is_missing() bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ -664,8 +659,7 @@ async fn single_pending_llm_hint_claims_next_gateway_llm() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ -761,8 +755,7 @@ async fn multiple_llm_hints_resolve_by_generation_id() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: 
None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ -876,8 +869,7 @@ async fn ambiguous_llm_hints_fall_back_to_agent_scope() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ -975,8 +967,7 @@ async fn no_active_hint_reuses_last_llm_owner() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, }; @@ -1083,7 +1074,7 @@ async fn no_active_hint_reuses_last_llm_owner() { async fn session_marks_cover_compaction_notifications_and_hook_marks() { let temp = tempfile::tempdir().unwrap(); let mut config = session_test_config(); - config.atif_dir = Some(temp.path().to_path_buf()); + config.exporters.atif_dir = Some(temp.path().to_path_buf()); let manager = SessionManager::new(config); let headers = HeaderMap::new(); @@ -1635,8 +1626,7 @@ fn session_test_config() -> GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: None, - openinference_endpoint: None, + exporters: ExportersConfig::default(), metadata: None, plugin_config: None, } @@ -1653,8 +1643,10 @@ async fn gateway_first_anthropic_call_labels_session_as_claude_code() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: Some(temp.path().to_path_buf()), - openinference_endpoint: None, + exporters: ExportersConfig { + atif_dir: Some(temp.path().to_path_buf()), + ..Default::default() + }, metadata: None, plugin_config: None, }; @@ -1701,8 +1693,10 @@ 
async fn gateway_first_openai_responses_call_labels_session_as_codex() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: Some(temp.path().to_path_buf()), - openinference_endpoint: None, + exporters: ExportersConfig { + atif_dir: Some(temp.path().to_path_buf()), + ..Default::default() + }, metadata: None, plugin_config: None, }; @@ -1745,8 +1739,10 @@ async fn synthetic_gateway_session_keeps_gateway_label() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: Some(temp.path().to_path_buf()), - openinference_endpoint: None, + exporters: ExportersConfig { + atif_dir: Some(temp.path().to_path_buf()), + ..Default::default() + }, metadata: None, plugin_config: None, }; @@ -1791,8 +1787,10 @@ async fn turn_ended_snapshots_atif_without_closing_scope() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: Some(temp.path().to_path_buf()), - openinference_endpoint: None, + exporters: ExportersConfig { + atif_dir: Some(temp.path().to_path_buf()), + ..Default::default() + }, metadata: None, plugin_config: None, }; @@ -1873,8 +1871,10 @@ async fn turn_ended_is_noop_for_session_with_no_agent_scope() { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), - atif_dir: Some(temp.path().to_path_buf()), - openinference_endpoint: None, + exporters: ExportersConfig { + atif_dir: Some(temp.path().to_path_buf()), + ..Default::default() + }, metadata: None, plugin_config: None, }; diff --git a/crates/cli/tests/coverage/setup_tests.rs b/crates/cli/tests/coverage/setup_tests.rs index c308a5d1..7aa01692 100644 --- a/crates/cli/tests/coverage/setup_tests.rs +++ b/crates/cli/tests/coverage/setup_tests.rs @@ -75,7 +75,7 @@ fn 
detect_installed_agents_finds_binaries_on_path() { } #[test] -fn build_config_emits_observability_section_when_atif_selected() { +fn build_config_emits_exporters_section_when_atif_selected() { let answers = SetupAnswers { scope: ConfigScope::Project, agents: vec![], @@ -88,13 +88,14 @@ fn build_config_emits_observability_section_when_atif_selected() { let doc = build_config(&answers); let rendered = doc.to_string(); - assert!(rendered.contains("[observability]")); + assert!(rendered.contains("[exporters]")); assert!(rendered.contains(r#"atif_dir = "./atif""#)); - assert!(!rendered.contains("[export")); + assert!(!rendered.contains("[export.")); + assert!(!rendered.contains("[observability]")); } #[test] -fn build_config_emits_export_section_when_openinference_selected() { +fn build_config_emits_exporters_section_when_openinference_selected() { let answers = SetupAnswers { scope: ConfigScope::Project, agents: vec![], @@ -107,8 +108,10 @@ fn build_config_emits_export_section_when_openinference_selected() { let doc = build_config(&answers); let rendered = doc.to_string(); - assert!(rendered.contains("[export.openinference]")); - assert!(rendered.contains(r#"endpoint = "http://localhost:6006/v1/traces""#)); + assert!(rendered.contains("[exporters]")); + assert!(rendered.contains(r#"openinference_endpoint = "http://localhost:6006/v1/traces""#)); + assert!(!rendered.contains("[export.")); + assert!(!rendered.contains("[observability]")); } #[test] @@ -125,6 +128,7 @@ fn build_config_skips_empty_sections_when_no_backends_selected() { let doc = build_config(&answers); let rendered = doc.to_string(); + assert!(!rendered.contains("[exporters]")); assert!(!rendered.contains("[observability]")); assert!(!rendered.contains("[export")); assert!(!rendered.contains("[agents]")); @@ -199,7 +203,7 @@ fn save_config_writes_project_scope_to_workspace_dir() { assert_eq!(written.len(), 1); assert_eq!(written[0], temp.path().join(".nemo-flow/config.toml")); let contents = 
std::fs::read_to_string(&written[0]).unwrap(); - assert!(contents.contains("[observability]")); + assert!(contents.contains("[exporters]")); assert!(contents.contains("[agents.claude]")); } @@ -247,7 +251,7 @@ command = "codex --full-auto" let merged = std::fs::read_to_string(&existing_path).unwrap(); // Wizard-owned sections are replaced with the new doc's content. - assert!(merged.contains("[observability]")); + assert!(merged.contains("[exporters]")); assert!(merged.contains("[agents.claude]")); assert!(merged.contains(r#"command = "claude""#)); // Other agents (not touched by this scoped run) survive. diff --git a/integrations/coding-agents/README.md b/integrations/coding-agents/README.md index c4b3635f..5c4ca9ff 100644 --- a/integrations/coding-agents/README.md +++ b/integrations/coding-agents/README.md @@ -54,6 +54,19 @@ Use `--agent claude|codex|cursor|hermes` when a wrapper hides the agent command name. Use `--dry-run --print` to inspect generated config without launching. +Use `nemo-flow doctor` to inspect environment, config, agent commands, hook +readiness, observability outputs, and shell completions. Scope the report to one +agent when troubleshooting launch readiness: + +```bash +nemo-flow doctor +nemo-flow doctor codex +nemo-flow doctor hermes --json +``` + +The command is read-only: it reports missing ATIF directories, hook files, and +agent commands instead of creating or patching them. + Hermes transparent runs export the dynamic `NEMO_FLOW_GATEWAY_URL`, but Hermes hooks must already be present in `.hermes/config.yaml` before they can call the gateway. The setup wizard (`nemo-flow config`) writes that file for you when @@ -65,12 +78,12 @@ project `.nemo-flow/config.toml`, then `~/.config/nemo-flow/config.toml`. 
```toml -[observability] +[exporters] atif_dir = ".nemo-flow/atif" -metadata = { team = "agent-observability" } +openinference_endpoint = "http://127.0.0.1:4318/v1/traces" -[export.openinference] -endpoint = "http://127.0.0.1:4318/v1/traces" +[observability] +metadata = { team = "agent-observability" } [agents.codex] command = "codex" From 09eba71bc1d23acf12c4e83d0e1b6e85cceb0b67 Mon Sep 17 00:00:00 2001 From: Ajay Thorve Date: Tue, 12 May 2026 12:42:16 -0700 Subject: [PATCH 10/15] Nest CLI exporter config Signed-off-by: Ajay Thorve --- crates/cli/src/config.rs | 148 +++++++++++++++------ crates/cli/src/doctor.rs | 6 +- crates/cli/src/launcher.rs | 26 +++- crates/cli/src/session.rs | 30 ++++- crates/cli/src/setup.rs | 41 ++++-- crates/cli/tests/coverage/config_tests.rs | 51 ++++--- crates/cli/tests/coverage/doctor_tests.rs | 6 +- crates/cli/tests/coverage/session_tests.rs | 83 ++++++++++-- crates/cli/tests/coverage/setup_tests.rs | 25 +++- integrations/coding-agents/README.md | 13 +- 10 files changed, 331 insertions(+), 98 deletions(-) diff --git a/crates/cli/src/config.rs b/crates/cli/src/config.rs index 76393920..4c20a232 100644 --- a/crates/cli/src/config.rs +++ b/crates/cli/src/config.rs @@ -6,6 +6,7 @@ use std::path::PathBuf; use axum::http::HeaderMap; use clap::{Args, Parser, Subcommand, ValueEnum}; +use nemo_flow::observability::atof::AtofExporterMode; use serde::Deserialize; use serde_json::Value; @@ -196,21 +197,41 @@ pub(crate) struct GatewayConfig { pub(crate) plugin_config: Option, } -/// Sinks the gateway writes observability data to. Grouped because every layer (CLI flags, env, -/// TOML, headers, session resolution) historically duplicated `atif_dir` / `openinference_endpoint` -/// side-by-side; adding `atof_dir` doubled that plumbing. This struct is the single seat where -/// exporter knobs live on the runtime model — flat CLI flags and TOML keys still exist for -/// ergonomics, but they all funnel into here. 
-/// -/// `atif_dir` — directory for per-session ATIF trajectory JSON files (one file per session). -/// `atof_dir` — directory for per-event ATOF JSONL streams (one event per line, raw event shape). -/// `openinference_endpoint` — OTLP HTTP endpoint for streaming OpenInference spans -/// (Phoenix / Arize / OTLP-compatible). +/// Sinks the gateway writes observability data to. Each exporter has its own nested config so +/// exporter-specific options (for example ATOF append/overwrite behavior) do not get flattened +/// into unrelated backends. #[derive(Debug, Clone, Default)] pub(crate) struct ExportersConfig { - pub(crate) atif_dir: Option, - pub(crate) atof_dir: Option, - pub(crate) openinference_endpoint: Option, + pub(crate) atif: AtifExporterSettings, + pub(crate) atof: AtofExporterSettings, + pub(crate) openinference: OpenInferenceExporterSettings, +} + +#[derive(Debug, Clone, Default)] +pub(crate) struct AtifExporterSettings { + pub(crate) dir: Option, +} + +#[derive(Debug, Clone)] +pub(crate) struct AtofExporterSettings { + pub(crate) dir: Option, + pub(crate) mode: AtofExporterMode, + pub(crate) filename_template: String, +} + +impl Default for AtofExporterSettings { + fn default() -> Self { + Self { + dir: None, + mode: AtofExporterMode::Append, + filename_template: "{session_id}.jsonl".into(), + } + } +} + +#[derive(Debug, Clone, Default)] +pub(crate) struct OpenInferenceExporterSettings { + pub(crate) endpoint: Option, } #[derive(Debug, Clone, Args)] @@ -314,14 +335,21 @@ impl GatewayConfig { // because install and hook-forward validate generated header values before sending them. 
pub(crate) fn session_config_from_headers(&self, headers: &HeaderMap) -> SessionConfig { let exporters = ExportersConfig { - atif_dir: header_string(headers, "x-nemo-flow-atif-dir") - .map(PathBuf::from) - .or_else(|| self.exporters.atif_dir.clone()), - atof_dir: header_string(headers, "x-nemo-flow-atof-dir") - .map(PathBuf::from) - .or_else(|| self.exporters.atof_dir.clone()), - openinference_endpoint: header_string(headers, "x-nemo-flow-openinference-endpoint") - .or_else(|| self.exporters.openinference_endpoint.clone()), + atif: AtifExporterSettings { + dir: header_string(headers, "x-nemo-flow-atif-dir") + .map(PathBuf::from) + .or_else(|| self.exporters.atif.dir.clone()), + }, + atof: AtofExporterSettings { + dir: header_string(headers, "x-nemo-flow-atof-dir") + .map(PathBuf::from) + .or_else(|| self.exporters.atof.dir.clone()), + ..self.exporters.atof.clone() + }, + openinference: OpenInferenceExporterSettings { + endpoint: header_string(headers, "x-nemo-flow-openinference-endpoint") + .or_else(|| self.exporters.openinference.endpoint.clone()), + }, }; let metadata = header_json(headers, "x-nemo-flow-session-metadata").or_else(|| self.metadata.clone()); @@ -408,11 +436,27 @@ struct FileObservabilityConfig { #[derive(Debug, Clone, Default, Deserialize)] struct FileExportersConfig { + atif: Option, + atof: Option, + openinference: Option, + // Legacy flat `[exporters]` keys from early CLI builds. atif_dir: Option, atof_dir: Option, openinference_endpoint: Option, } +#[derive(Debug, Clone, Default, Deserialize)] +struct FileAtifExporterConfig { + dir: Option, +} + +#[derive(Debug, Clone, Default, Deserialize)] +struct FileAtofExporterConfig { + dir: Option, + mode: Option, + filename_template: Option, +} + // Legacy `[export.]` shape. New configs use `[exporters]`; this stays readable so // existing user files do not break. 
#[derive(Debug, Clone, Default, Deserialize)] @@ -525,13 +569,13 @@ fn apply_run_url_overrides(config: &mut GatewayConfig, command: &RunCommand) { config.anthropic_base_url = value.clone(); } if let Some(value) = &command.atif_dir { - config.exporters.atif_dir = Some(value.clone()); + config.exporters.atif.dir = Some(value.clone()); } if let Some(value) = &command.atof_dir { - config.exporters.atof_dir = Some(value.clone()); + config.exporters.atof.dir = Some(value.clone()); } if let Some(value) = &command.openinference_endpoint { - config.exporters.openinference_endpoint = Some(value.clone()); + config.exporters.openinference.endpoint = Some(value.clone()); } } @@ -563,13 +607,13 @@ fn apply_server_overrides(config: &mut GatewayConfig, args: &ServerArgs) { config.anthropic_base_url = value.clone(); } if let Some(value) = &args.atif_dir { - config.exporters.atif_dir = Some(value.clone()); + config.exporters.atif.dir = Some(value.clone()); } if let Some(value) = &args.atof_dir { - config.exporters.atof_dir = Some(value.clone()); + config.exporters.atof.dir = Some(value.clone()); } if let Some(value) = &args.openinference_endpoint { - config.exporters.openinference_endpoint = Some(value.clone()); + config.exporters.openinference.endpoint = Some(value.clone()); } } @@ -663,7 +707,7 @@ fn apply_file_config(resolved: &mut ResolvedConfig, value: toml::Value) -> Resul apply_file_upstream_config(&mut resolved.gateway, config.upstream); apply_file_observability_config(&mut resolved.gateway, config.observability); apply_file_export_config(&mut resolved.gateway, config.export); - apply_file_exporters_config(&mut resolved.gateway, config.exporters); + apply_file_exporters_config(&mut resolved.gateway, config.exporters)?; apply_file_plugins_config(&mut resolved.gateway, config.plugins); apply_file_agents_config(&mut resolved.agents, config.agents); Ok(()) @@ -694,10 +738,10 @@ fn apply_file_observability_config( return; }; if let Some(value) = observability.atif_dir { - 
gateway.exporters.atif_dir = Some(value); + gateway.exporters.atif.dir = Some(value); } if let Some(value) = observability.atof_dir { - gateway.exporters.atof_dir = Some(value); + gateway.exporters.atof.dir = Some(value); } if let Some(value) = observability.metadata { gateway.metadata = Some(value); @@ -712,7 +756,7 @@ fn apply_file_export_config(gateway: &mut GatewayConfig, export: Option, -) { +) -> Result<(), CliError> { let Some(exporters) = exporters else { - return; + return Ok(()); }; if let Some(value) = exporters.atif_dir { - gateway.exporters.atif_dir = Some(value); + gateway.exporters.atif.dir = Some(value); } if let Some(value) = exporters.atof_dir { - gateway.exporters.atof_dir = Some(value); + gateway.exporters.atof.dir = Some(value); } if let Some(value) = exporters.openinference_endpoint { - gateway.exporters.openinference_endpoint = Some(value); + gateway.exporters.openinference.endpoint = Some(value); } + if let Some(atif) = exporters.atif + && let Some(value) = atif.dir + { + gateway.exporters.atif.dir = Some(value); + } + if let Some(atof) = exporters.atof { + if let Some(value) = atof.dir { + gateway.exporters.atof.dir = Some(value); + } + if let Some(value) = atof.mode { + gateway.exporters.atof.mode = AtofExporterMode::parse(&value).ok_or_else(|| { + CliError::Config(format!( + "invalid [exporters.atof].mode `{value}`; expected append or overwrite" + )) + })?; + } + if let Some(value) = atof.filename_template { + gateway.exporters.atof.filename_template = value; + } + } + if let Some(openinference) = exporters.openinference + && let Some(value) = openinference.endpoint + { + gateway.exporters.openinference.endpoint = Some(value); + } + Ok(()) } // Applies plugin config. 
Reserved for the plugin runtime — stored on `GatewayConfig.plugin_config` @@ -787,13 +857,13 @@ fn apply_env_config(config: &mut GatewayConfig) { config.anthropic_base_url = value; } if let Some(value) = std::env::var_os("NEMO_FLOW_ATIF_DIR") { - config.exporters.atif_dir = Some(PathBuf::from(value)); + config.exporters.atif.dir = Some(PathBuf::from(value)); } if let Some(value) = std::env::var_os("NEMO_FLOW_ATOF_DIR") { - config.exporters.atof_dir = Some(PathBuf::from(value)); + config.exporters.atof.dir = Some(PathBuf::from(value)); } if let Ok(value) = std::env::var("NEMO_FLOW_OPENINFERENCE_ENDPOINT") { - config.exporters.openinference_endpoint = Some(value); + config.exporters.openinference.endpoint = Some(value); } } diff --git a/crates/cli/src/doctor.rs b/crates/cli/src/doctor.rs index 429c9da1..6d365889 100644 --- a/crates/cli/src/doctor.rs +++ b/crates/cli/src/doctor.rs @@ -441,7 +441,7 @@ async fn probe_version(binary: &Path) -> Option { async fn collect_observability(gateway: &GatewayConfig) -> Vec { let mut checks = Vec::new(); - checks.push(match &gateway.exporters.atif_dir { + checks.push(match &gateway.exporters.atif.dir { None => Check { name: "ATIF dir", status: Status::Info, @@ -469,7 +469,7 @@ async fn collect_observability(gateway: &GatewayConfig) -> Vec { }, }); - checks.push(match &gateway.exporters.atof_dir { + checks.push(match &gateway.exporters.atof.dir { None => Check { name: "ATOF dir", status: Status::Info, @@ -497,7 +497,7 @@ async fn collect_observability(gateway: &GatewayConfig) -> Vec { }, }); - checks.push(match &gateway.exporters.openinference_endpoint { + checks.push(match &gateway.exporters.openinference.endpoint { None => Check { name: "OpenInference endpoint", status: Status::Info, diff --git a/crates/cli/src/launcher.rs b/crates/cli/src/launcher.rs index 38b95c4f..b4140ed3 100644 --- a/crates/cli/src/launcher.rs +++ b/crates/cli/src/launcher.rs @@ -511,15 +511,19 @@ impl PreparedRun { let mut lines: Vec = Vec::new(); 
lines.push(format!("NeMo Flow → {}", agent.as_arg())); lines.push(format!(" Gateway {gateway_url}")); - match &resolved.gateway.exporters.atif_dir { + match &resolved.gateway.exporters.atif.dir { Some(path) => lines.push(format!(" ATIF {}", path.display())), None => lines.push(" ATIF (disabled)".to_string()), } - match &resolved.gateway.exporters.atof_dir { - Some(path) => lines.push(format!(" ATOF {}", path.display())), + match &resolved.gateway.exporters.atof.dir { + Some(path) => lines.push(format!( + " ATOF {} ({})", + path.display(), + resolved.gateway.exporters.atof.mode.as_str() + )), None => lines.push(" ATOF (disabled)".to_string()), } - match &resolved.gateway.exporters.openinference_endpoint { + match &resolved.gateway.exporters.openinference.endpoint { Some(endpoint) => lines.push(format!(" OpenInference {endpoint}")), None => lines.push(" OpenInference (disabled)".to_string()), } @@ -562,13 +566,21 @@ impl PreparedRun { "anthropic_base_url = {}", resolved.gateway.anthropic_base_url ); - if let Some(path) = &resolved.gateway.exporters.atif_dir { + if let Some(path) = &resolved.gateway.exporters.atif.dir { println!("atif_dir = {}", path.display()); } - if let Some(path) = &resolved.gateway.exporters.atof_dir { + if let Some(path) = &resolved.gateway.exporters.atof.dir { println!("atof_dir = {}", path.display()); + println!( + "atof_mode = {}", + resolved.gateway.exporters.atof.mode.as_str() + ); + println!( + "atof_filename_template = {}", + resolved.gateway.exporters.atof.filename_template + ); } - if let Some(endpoint) = &resolved.gateway.exporters.openinference_endpoint { + if let Some(endpoint) = &resolved.gateway.exporters.openinference.endpoint { println!("openinference_endpoint = {endpoint}"); } println!("argv = {}", self.argv.join(" ")); diff --git a/crates/cli/src/session.rs b/crates/cli/src/session.rs index 2d8e7efd..98cfc11e 100644 --- a/crates/cli/src/session.rs +++ b/crates/cli/src/session.rs @@ -399,7 +399,7 @@ impl Session { if 
self.agent_scope.is_none() { return Ok(()); } - if let (Some(exporter), Some(directory)) = (&self.atif, &self.config.exporters.atif_dir) { + if let (Some(exporter), Some(directory)) = (&self.atif, &self.config.exporters.atif.dir) { write_atif(directory, &self.session_id, exporter)?; } Ok(()) @@ -544,7 +544,7 @@ impl Session { if self.atof.is_some() { return Ok(()); } - let Some(directory) = self.config.exporters.atof_dir.clone() else { + let Some(directory) = self.config.exporters.atof.dir.clone() else { return Ok(()); }; // Ensure the directory exists; AtofExporter opens the file via OpenOptions which won't @@ -556,9 +556,14 @@ impl Session { directory.display() )) })?; + let filename = render_atof_filename_template( + &self.config.exporters.atof.filename_template, + &self.session_id, + )?; let config = AtofExporterConfig::default() .with_output_directory(directory) - .with_filename(format!("{}.jsonl", self.session_id)); + .with_mode(self.config.exporters.atof.mode) + .with_filename(filename); let exporter = AtofExporter::new(config) .map_err(|err| CliError::Config(format!("could not open ATOF file: {err}")))?; scope_register_subscriber(&root.uuid, "gateway-atof", exporter.subscriber())?; @@ -569,7 +574,7 @@ impl Session { // Registers the ATIF exporter once when a session has ATIF output configured. The exporter keeps // the session agent metadata so downstream trajectory files can be attributed to this run. 
fn install_atif_observer(&mut self, root: &ScopeHandle) -> Result<(), CliError> { - if self.atif.is_some() || self.config.exporters.atif_dir.is_none() { + if self.atif.is_some() || self.config.exporters.atif.dir.is_none() { return Ok(()); } let exporter = AtifExporter::new( @@ -593,7 +598,7 @@ impl Session { if self.openinference.is_some() { return Ok(()); } - let Some(endpoint) = &self.config.exporters.openinference_endpoint else { + let Some(endpoint) = &self.config.exporters.openinference.endpoint else { return Ok(()); }; let subscriber = OpenInferenceSubscriber::new( @@ -946,7 +951,7 @@ impl Session { subscriber.force_flush()?; subscriber.shutdown()?; } - if let (Some(exporter), Some(directory)) = (&self.atif, &self.config.exporters.atif_dir) { + if let (Some(exporter), Some(directory)) = (&self.atif, &self.config.exporters.atif.dir) { write_atif(directory, &self.session_id, exporter)?; } // ATOF writes per-event JSONL as events arrive; flush + shutdown here just ensure the @@ -1304,6 +1309,19 @@ fn validate_atif_session_id(session_id: &str) -> Result<(), CliError> { Ok(()) } +fn render_atof_filename_template(template: &str, session_id: &str) -> Result { + validate_atif_session_id(session_id)?; + let filename = template.replace("{session_id}", session_id); + let path = std::path::Path::new(&filename); + if filename.is_empty() || filename == "." || filename == ".." || path.components().count() != 1 + { + return Err(CliError::InvalidPayload( + "ATOF filename template must render to a single safe filename".into(), + )); + } + Ok(filename) +} + // Scores how strongly a pending hint matches a gateway LLM request. Subagent/agent identity is // weighted highest, request/conversation/generation identifiers are equal, and model match is only // a low-confidence tie breaker. 
diff --git a/crates/cli/src/setup.rs b/crates/cli/src/setup.rs index 73fe6453..658cca7f 100644 --- a/crates/cli/src/setup.rs +++ b/crates/cli/src/setup.rs @@ -121,8 +121,8 @@ pub(crate) fn detect_installed_agents_in(path_var: Option<&std::ffi::OsStr>) -> pub(crate) fn build_config(answers: &SetupAnswers) -> DocumentMut { let mut doc = DocumentMut::new(); - // Build the exporter table once so selecting multiple backends produces a single section with - // all enabled sinks, not separate legacy observability/export blocks. + // Build the exporter table once so selecting multiple backends produces nested per-exporter + // sections, not separate legacy observability/export blocks. let want_atif = answers.backends.contains(&ObservabilityBackend::Atif); let want_atof = answers.backends.contains(&ObservabilityBackend::Atof); let want_openinference = answers @@ -132,13 +132,21 @@ pub(crate) fn build_config(answers: &SetupAnswers) -> DocumentMut { if want_atif || want_atof || want_openinference { let mut exporters = Table::new(); if want_atif { - exporters["atif_dir"] = value("./atif"); + let mut atif = Table::new(); + atif["dir"] = value("./atif"); + exporters.insert("atif", Item::Table(atif)); } if want_atof { - exporters["atof_dir"] = value("./atof"); + let mut atof = Table::new(); + atof["dir"] = value("./atof"); + atof["mode"] = value("append"); + atof["filename_template"] = value("{session_id}.jsonl"); + exporters.insert("atof", Item::Table(atof)); } if let Some(endpoint) = answers.openinference_endpoint.as_deref() { - exporters["openinference_endpoint"] = value(endpoint); + let mut openinference = Table::new(); + openinference["endpoint"] = value(endpoint); + exporters.insert("openinference", Item::Table(openinference)); } doc["exporters"] = Item::Table(exporters); } @@ -542,17 +550,34 @@ fn read_existing_defaults() -> Option { Some(Defaults { scope, agents: read_agents_from_doc(&doc), - atif_enabled: exporters.and_then(|t| t.get("atif_dir")).is_some() + 
atif_enabled: exporters + .and_then(|t| t.get("atif")) + .and_then(|i| i.as_table()) + .and_then(|t| t.get("dir")) + .is_some() + || exporters.and_then(|t| t.get("atif_dir")).is_some() || legacy_observability .and_then(|t| t.get("atif_dir")) .is_some(), - atof_enabled: exporters.and_then(|t| t.get("atof_dir")).is_some() + atof_enabled: exporters + .and_then(|t| t.get("atof")) + .and_then(|i| i.as_table()) + .and_then(|t| t.get("dir")) + .is_some() + || exporters.and_then(|t| t.get("atof_dir")).is_some() || legacy_observability .and_then(|t| t.get("atof_dir")) .is_some(), openinference_endpoint: exporters - .and_then(|t| t.get("openinference_endpoint")) + .and_then(|t| t.get("openinference")) + .and_then(|i| i.as_table()) + .and_then(|t| t.get("endpoint")) .and_then(|i| i.as_str()) + .or_else(|| { + exporters + .and_then(|t| t.get("openinference_endpoint")) + .and_then(|i| i.as_str()) + }) .or_else(|| { legacy_export .and_then(|t| t.get("openinference")) diff --git a/crates/cli/tests/coverage/config_tests.rs b/crates/cli/tests/coverage/config_tests.rs index 3339e1a7..64ab32d9 100644 --- a/crates/cli/tests/coverage/config_tests.rs +++ b/crates/cli/tests/coverage/config_tests.rs @@ -11,8 +11,12 @@ fn config() -> GatewayConfig { openai_base_url: "http://openai".into(), anthropic_base_url: "http://anthropic".into(), exporters: ExportersConfig { - atif_dir: Some(PathBuf::from("default-atif")), - openinference_endpoint: Some("http://default-otel".into()), + atif: AtifExporterSettings { + dir: Some(PathBuf::from("default-atif")), + }, + openinference: OpenInferenceExporterSettings { + endpoint: Some("http://default-otel".into()), + }, ..Default::default() }, metadata: None, @@ -51,11 +55,11 @@ fn session_config_prefers_headers_and_parses_json() { let session = config().session_config_from_headers(&headers); assert_eq!( - session.exporters.atif_dir, + session.exporters.atif.dir, Some(PathBuf::from("header-atif")) ); assert_eq!( - 
session.exporters.openinference_endpoint.as_deref(), + session.exporters.openinference.endpoint.as_deref(), Some("http://header-otel") ); assert_eq!(session.profile.as_deref(), Some("profile-a")); @@ -76,11 +80,11 @@ fn session_config_uses_defaults_and_ignores_bad_json() { let session = config().session_config_from_headers(&headers); assert_eq!( - session.exporters.atif_dir, + session.exporters.atif.dir, Some(PathBuf::from("default-atif")) ); assert_eq!( - session.exporters.openinference_endpoint.as_deref(), + session.exporters.openinference.endpoint.as_deref(), Some("http://default-otel") ); assert_eq!(session.metadata, None); @@ -124,10 +128,16 @@ fn explicit_toml_config_maps_supported_sections() { openai_base_url = "http://openai" anthropic_base_url = "http://anthropic" -[exporters] -atif_dir = "atif" -atof_dir = "atof" -openinference_endpoint = "http://otel" +[exporters.atif] +dir = "atif" + +[exporters.atof] +dir = "atof" +mode = "overwrite" +filename_template = "{session_id}-events.jsonl" + +[exporters.openinference] +endpoint = "http://otel" [observability] metadata = { team = "obs" } @@ -173,15 +183,20 @@ command = "hermes --yolo chat" assert_eq!(resolved.gateway.openai_base_url, "http://openai"); assert_eq!(resolved.gateway.anthropic_base_url, "http://anthropic"); assert_eq!( - resolved.gateway.exporters.atif_dir, + resolved.gateway.exporters.atif.dir, Some(PathBuf::from("atif")) ); assert_eq!( - resolved.gateway.exporters.atof_dir, + resolved.gateway.exporters.atof.dir, Some(PathBuf::from("atof")) ); + assert_eq!(resolved.gateway.exporters.atof.mode.as_str(), "overwrite"); + assert_eq!( + resolved.gateway.exporters.atof.filename_template, + "{session_id}-events.jsonl" + ); assert_eq!( - resolved.gateway.exporters.openinference_endpoint.as_deref(), + resolved.gateway.exporters.openinference.endpoint.as_deref(), Some("http://otel") ); assert_eq!(resolved.gateway.metadata, Some(json!({ "team": "obs" }))); @@ -231,7 +246,7 @@ metadata = { team = "file" } 
assert_eq!(resolved.gateway.openai_base_url, "http://cli-openai"); assert_eq!( - resolved.gateway.exporters.atif_dir, + resolved.gateway.exporters.atif.dir, Some(PathBuf::from("cli-atif")) ); assert_eq!(resolved.gateway.metadata, Some(json!({ "team": "cli" }))); @@ -294,11 +309,11 @@ fn server_resolution_applies_all_server_overrides() { assert_eq!(resolved.gateway.openai_base_url, "http://cli-openai"); assert_eq!(resolved.gateway.anthropic_base_url, "http://cli-anthropic"); assert_eq!( - resolved.gateway.exporters.atif_dir, + resolved.gateway.exporters.atif.dir, Some(PathBuf::from("cli-atif")) ); assert_eq!( - resolved.gateway.exporters.openinference_endpoint.as_deref(), + resolved.gateway.exporters.openinference.endpoint.as_deref(), Some("http://cli-otel") ); } @@ -325,11 +340,11 @@ fn run_resolution_applies_all_run_overrides() { assert_eq!(resolved.gateway.openai_base_url, "http://run-openai"); assert_eq!(resolved.gateway.anthropic_base_url, "http://run-anthropic"); assert_eq!( - resolved.gateway.exporters.atif_dir, + resolved.gateway.exporters.atif.dir, Some(PathBuf::from("run-atif")) ); assert_eq!( - resolved.gateway.exporters.openinference_endpoint.as_deref(), + resolved.gateway.exporters.openinference.endpoint.as_deref(), Some("http://run-otel") ); assert_eq!(resolved.gateway.metadata, Some(json!({ "team": "run" }))); diff --git a/crates/cli/tests/coverage/doctor_tests.rs b/crates/cli/tests/coverage/doctor_tests.rs index a7c93635..1a9d63e0 100644 --- a/crates/cli/tests/coverage/doctor_tests.rs +++ b/crates/cli/tests/coverage/doctor_tests.rs @@ -2,7 +2,7 @@ // SPDX-License-Identifier: Apache-2.0 use super::*; -use crate::config::ExportersConfig; +use crate::config::{AtifExporterSettings, ExportersConfig}; use std::path::PathBuf; fn empty_report() -> DoctorReport { @@ -216,7 +216,9 @@ async fn collect_observability_warns_for_missing_atif_dir_without_creating_it() let missing = temp.path().join("missing-atif"); let gateway = GatewayConfig { exporters: 
ExportersConfig { - atif_dir: Some(missing.clone()), + atif: AtifExporterSettings { + dir: Some(missing.clone()), + }, ..Default::default() }, ..GatewayConfig::default() diff --git a/crates/cli/tests/coverage/session_tests.rs b/crates/cli/tests/coverage/session_tests.rs index c079c01f..7145eb65 100644 --- a/crates/cli/tests/coverage/session_tests.rs +++ b/crates/cli/tests/coverage/session_tests.rs @@ -2,10 +2,11 @@ // SPDX-License-Identifier: Apache-2.0 use axum::http::HeaderMap; +use nemo_flow::observability::atof::AtofExporterMode; use serde_json::json; use super::*; -use crate::config::ExportersConfig; +use crate::config::{AtifExporterSettings, AtofExporterSettings, ExportersConfig}; use crate::model::{LlmEvent, LlmHintEvent, SessionEvent, ToolEvent}; #[tokio::test] @@ -140,6 +141,56 @@ async fn writes_atif_on_session_end_from_header_config() { assert_eq!(atif["agent"]["name"], json!("codex")); } +#[tokio::test] +async fn writes_atof_with_configured_mode_and_filename_template() { + let temp = tempfile::tempdir().unwrap(); + let output = temp.path().join("custom-atof-mode.jsonl"); + std::fs::write(&output, "{\"existing\":true}\n").unwrap(); + let config = GatewayConfig { + bind: "127.0.0.1:0".parse().unwrap(), + openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), + exporters: ExportersConfig { + atof: AtofExporterSettings { + dir: Some(temp.path().to_path_buf()), + mode: AtofExporterMode::Overwrite, + filename_template: "custom-{session_id}.jsonl".into(), + }, + ..Default::default() + }, + metadata: None, + plugin_config: None, + }; + let manager = SessionManager::new(config); + + manager + .apply_events( + &HeaderMap::new(), + vec![ + NormalizedEvent::AgentStarted(SessionEvent { + session_id: "atof-mode".into(), + agent_kind: AgentKind::Codex, + event_name: "SessionStart".into(), + payload: json!({}), + metadata: json!({}), + }), + NormalizedEvent::AgentEnded(SessionEvent { + session_id: "atof-mode".into(), + agent_kind: 
AgentKind::Codex, + event_name: "SessionEnd".into(), + payload: json!({}), + metadata: json!({}), + }), + ], + ) + .await + .unwrap(); + + let contents = std::fs::read_to_string(output).unwrap(); + assert!(!contents.contains("existing")); + assert!(contents.contains("atof-mode")); +} + #[tokio::test] async fn duplicate_agent_end_does_not_overwrite_atif_with_empty_session() { // Regression test: hermes-agent and other integrations can emit terminal hooks more than once @@ -152,7 +203,9 @@ async fn duplicate_agent_end_does_not_overwrite_atif_with_empty_session() { openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), exporters: ExportersConfig { - atif_dir: Some(temp.path().to_path_buf()), + atif: AtifExporterSettings { + dir: Some(temp.path().to_path_buf()), + }, ..Default::default() }, metadata: None, @@ -358,7 +411,7 @@ async fn handles_out_of_order_subagent_and_tool_end_events() { async fn terminal_retry_for_unknown_session_is_ignored() { let temp = tempfile::tempdir().unwrap(); let mut config = session_test_config(); - config.exporters.atif_dir = Some(temp.path().to_path_buf()); + config.exporters.atif.dir = Some(temp.path().to_path_buf()); let manager = SessionManager::new(config); manager @@ -554,7 +607,7 @@ async fn llm_lifecycle_starts_implicit_gateway_session() { async fn agent_end_closes_in_flight_gateway_llm() { let temp = tempfile::tempdir().unwrap(); let mut config = session_test_config(); - config.exporters.atif_dir = Some(temp.path().to_path_buf()); + config.exporters.atif.dir = Some(temp.path().to_path_buf()); let manager = SessionManager::new(config); let _active = manager .start_llm( @@ -1074,7 +1127,7 @@ async fn no_active_hint_reuses_last_llm_owner() { async fn session_marks_cover_compaction_notifications_and_hook_marks() { let temp = tempfile::tempdir().unwrap(); let mut config = session_test_config(); - config.exporters.atif_dir = Some(temp.path().to_path_buf()); + config.exporters.atif.dir = 
Some(temp.path().to_path_buf()); let manager = SessionManager::new(config); let headers = HeaderMap::new(); @@ -1644,7 +1697,9 @@ async fn gateway_first_anthropic_call_labels_session_as_claude_code() { openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), exporters: ExportersConfig { - atif_dir: Some(temp.path().to_path_buf()), + atif: AtifExporterSettings { + dir: Some(temp.path().to_path_buf()), + }, ..Default::default() }, metadata: None, @@ -1694,7 +1749,9 @@ async fn gateway_first_openai_responses_call_labels_session_as_codex() { openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), exporters: ExportersConfig { - atif_dir: Some(temp.path().to_path_buf()), + atif: AtifExporterSettings { + dir: Some(temp.path().to_path_buf()), + }, ..Default::default() }, metadata: None, @@ -1740,7 +1797,9 @@ async fn synthetic_gateway_session_keeps_gateway_label() { openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), exporters: ExportersConfig { - atif_dir: Some(temp.path().to_path_buf()), + atif: AtifExporterSettings { + dir: Some(temp.path().to_path_buf()), + }, ..Default::default() }, metadata: None, @@ -1788,7 +1847,9 @@ async fn turn_ended_snapshots_atif_without_closing_scope() { openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), exporters: ExportersConfig { - atif_dir: Some(temp.path().to_path_buf()), + atif: AtifExporterSettings { + dir: Some(temp.path().to_path_buf()), + }, ..Default::default() }, metadata: None, @@ -1872,7 +1933,9 @@ async fn turn_ended_is_noop_for_session_with_no_agent_scope() { openai_base_url: "http://127.0.0.1".into(), anthropic_base_url: "http://127.0.0.1".into(), exporters: ExportersConfig { - atif_dir: Some(temp.path().to_path_buf()), + atif: AtifExporterSettings { + dir: Some(temp.path().to_path_buf()), + }, ..Default::default() }, metadata: None, diff --git 
a/crates/cli/tests/coverage/setup_tests.rs b/crates/cli/tests/coverage/setup_tests.rs index 7aa01692..04a1494f 100644 --- a/crates/cli/tests/coverage/setup_tests.rs +++ b/crates/cli/tests/coverage/setup_tests.rs @@ -89,7 +89,8 @@ fn build_config_emits_exporters_section_when_atif_selected() { let rendered = doc.to_string(); assert!(rendered.contains("[exporters]")); - assert!(rendered.contains(r#"atif_dir = "./atif""#)); + assert!(rendered.contains("[exporters.atif]")); + assert!(rendered.contains(r#"dir = "./atif""#)); assert!(!rendered.contains("[export.")); assert!(!rendered.contains("[observability]")); } @@ -109,11 +110,31 @@ fn build_config_emits_exporters_section_when_openinference_selected() { let rendered = doc.to_string(); assert!(rendered.contains("[exporters]")); - assert!(rendered.contains(r#"openinference_endpoint = "http://localhost:6006/v1/traces""#)); + assert!(rendered.contains("[exporters.openinference]")); + assert!(rendered.contains(r#"endpoint = "http://localhost:6006/v1/traces""#)); assert!(!rendered.contains("[export.")); assert!(!rendered.contains("[observability]")); } +#[test] +fn build_config_emits_atof_write_options_when_atof_selected() { + let answers = SetupAnswers { + scope: ConfigScope::Project, + agents: vec![], + backends: vec![ObservabilityBackend::Atof], + openinference_endpoint: None, + openai_base_url: None, + hermes_hooks_path: None, + }; + + let rendered = build_config(&answers).to_string(); + + assert!(rendered.contains("[exporters.atof]")); + assert!(rendered.contains(r#"dir = "./atof""#)); + assert!(rendered.contains(r#"mode = "append""#)); + assert!(rendered.contains(r#"filename_template = "{session_id}.jsonl""#)); +} + #[test] fn build_config_skips_empty_sections_when_no_backends_selected() { let answers = SetupAnswers { diff --git a/integrations/coding-agents/README.md b/integrations/coding-agents/README.md index 5c4ca9ff..bb6e1c8e 100644 --- a/integrations/coding-agents/README.md +++ 
b/integrations/coding-agents/README.md @@ -78,9 +78,16 @@ project `.nemo-flow/config.toml`, then `~/.config/nemo-flow/config.toml`. ```toml -[exporters] -atif_dir = ".nemo-flow/atif" -openinference_endpoint = "http://127.0.0.1:4318/v1/traces" +[exporters.atif] +dir = ".nemo-flow/atif" + +[exporters.atof] +dir = ".nemo-flow/atof" +mode = "append" # append | overwrite +filename_template = "{session_id}.jsonl" + +[exporters.openinference] +endpoint = "http://127.0.0.1:4318/v1/traces" [observability] metadata = { team = "agent-observability" } From 5e539765c6a173292889cdceb3ce24a442ca9fe4 Mon Sep 17 00:00:00 2001 From: Ajay Thorve Date: Tue, 12 May 2026 12:55:48 -0700 Subject: [PATCH 11/15] Report invalid CLI config in doctor Signed-off-by: Ajay Thorve --- crates/cli/src/config.rs | 1 + crates/cli/src/doctor.rs | 40 ++++++++++++++++++-- crates/cli/tests/cli_tests.rs | 45 +++++++++++++++++++++++ crates/cli/tests/coverage/doctor_tests.rs | 26 +++++++++++++ 4 files changed, 109 insertions(+), 3 deletions(-) diff --git a/crates/cli/src/config.rs b/crates/cli/src/config.rs index 4c20a232..8f6fcdab 100644 --- a/crates/cli/src/config.rs +++ b/crates/cli/src/config.rs @@ -15,6 +15,7 @@ use crate::error::CliError; #[derive(Debug, Clone, Parser)] #[command(name = "nemo-flow")] #[command(about = "Coding-agent gateway for NeMo Flow observability")] +#[command(version)] pub(crate) struct Cli { #[command(flatten)] pub(crate) server: ServerArgs, diff --git a/crates/cli/src/doctor.rs b/crates/cli/src/doctor.rs index 6d365889..35c39c24 100644 --- a/crates/cli/src/doctor.rs +++ b/crates/cli/src/doctor.rs @@ -70,6 +70,7 @@ pub(crate) struct ConfigurationInfo { pub workspace: ConfigLayer, pub global: ConfigLayer, pub system: ConfigLayer, + pub resolution: Check, pub default_agent: Option<String>, pub configured_agents: Vec<String>, } @@ -100,7 +101,24 @@ pub(crate) struct AgentInfo { pub(crate) async fn collect_report( target_agent: Option<CodingAgent>, ) -> Result<DoctorReport, CliError> { - let resolved =
resolve_server_config(&ServerArgs::default()).unwrap_or_default(); + let (resolved, resolution) = match resolve_server_config(&ServerArgs::default()) { + Ok(resolved) => ( + resolved, + Check { + name: "Resolution", + status: Status::Pass, + details: "valid".into(), + }, + ), + Err(err) => ( + ResolvedConfig::default(), + Check { + name: "Resolution", + status: Status::Fail, + details: format!("could not resolve merged config: {err}"), + }, + ), + }; let cwd = std::env::current_dir().ok(); let home = home_dir(); let configured_agents = configured_agent_names(&resolved.agents); @@ -110,7 +128,12 @@ pub(crate) async fn collect_report( binary_version: env!("CARGO_PKG_VERSION"), target_agent: target_agent.map(|agent| agent.as_arg().to_string()), environment: collect_environment(), - configuration: collect_configuration(cwd.as_deref(), home.as_deref(), configured_agents), + configuration: collect_configuration( + cwd.as_deref(), + home.as_deref(), + resolution, + configured_agents, + ), agents: collect_agents(target_agent, &resolved).await, observability: collect_observability(&resolved.gateway).await, completions: collect_completions(home.as_deref()), @@ -143,6 +166,7 @@ fn os_version() -> String { fn collect_configuration( cwd: Option<&Path>, home: Option<&Path>, + resolution: Check, configured_agents: Vec, ) -> ConfigurationInfo { let workspace_path = cwd @@ -160,6 +184,7 @@ fn collect_configuration( workspace: layer_status(&workspace_path), global: layer_status(&global_path), system: layer_status(&system_path), + resolution, // `default_agent` is reserved in the design for Phase 2 dispatch; not currently parsed // out of FileConfig. Doctor reports `None` until that lands. 
default_agent: None, @@ -631,7 +656,8 @@ pub(crate) fn exit_code(report: &DoctorReport) -> u8 { .any(|agent| matches!(agent.status, Status::Fail)) || matches!(report.configuration.workspace.status, Status::Fail) || matches!(report.configuration.global.status, Status::Fail) - || matches!(report.configuration.system.status, Status::Fail); + || matches!(report.configuration.system.status, Status::Fail) + || matches!(report.configuration.resolution.status, Status::Fail); u8::from(any_fail) } @@ -651,6 +677,7 @@ fn report_has_warn(report: &DoctorReport) -> bool { || matches!(report.configuration.workspace.status, Status::Warn) || matches!(report.configuration.global.status, Status::Warn) || matches!(report.configuration.system.status, Status::Warn) + || matches!(report.configuration.resolution.status, Status::Warn) } /// Renders the doctor report in the fixed human-readable layout the design doc shows. Sections @@ -688,6 +715,13 @@ pub(crate) fn format_human(report: &DoctorReport) -> String { " System {}\n", format_layer(&report.configuration.system) )); + if !matches!(report.configuration.resolution.status, Status::Pass) { + out.push_str(&format!( + " Resolution {} {}\n", + format_status(report.configuration.resolution.status), + report.configuration.resolution.details + )); + } if !report.configuration.configured_agents.is_empty() { out.push_str(&format!( " Agents {}\n", diff --git a/crates/cli/tests/cli_tests.rs b/crates/cli/tests/cli_tests.rs index 8ff8761d..14bf4f25 100644 --- a/crates/cli/tests/cli_tests.rs +++ b/crates/cli/tests/cli_tests.rs @@ -21,6 +21,17 @@ fn cli_help_exits_successfully() { assert!(String::from_utf8_lossy(&output.stdout).contains("Coding-agent gateway")); } +#[test] +fn cli_version_exits_successfully() { + let output = Command::new(gateway_bin()) + .arg("--version") + .output() + .unwrap(); + + assert!(output.status.success()); + assert!(String::from_utf8_lossy(&output.stdout).contains("nemo-flow ")); +} + #[test] fn 
cli_help_lists_easy_path_agent_shortcuts() { let output = Command::new(gateway_bin()).arg("--help").output().unwrap(); @@ -118,6 +129,40 @@ fn cli_bare_invocation_runs_doctor_when_config_exists() { assert!(stdout.contains("Agents detected")); } +#[test] +fn cli_bare_invocation_reports_invalid_config_resolution() { + let temp = tempfile::tempdir().unwrap(); + let xdg = temp.path().join("xdg"); + std::fs::create_dir_all(&xdg).unwrap(); + let cwd = temp.path().join("workdir"); + std::fs::create_dir_all(cwd.join(".nemo-flow")).unwrap(); + std::fs::write( + cwd.join(".nemo-flow/config.toml"), + r#" +[exporters.atof] +dir = "./atof" +mode = "replace" +"#, + ) + .unwrap(); + + let output = Command::new(gateway_bin()) + .current_dir(&cwd) + .env("XDG_CONFIG_HOME", &xdg) + .env("HOME", temp.path()) + .output() + .unwrap(); + + assert!( + !output.status.success(), + "bare invocation should fail doctor when config resolution fails" + ); + let stdout = String::from_utf8_lossy(&output.stdout); + assert!(stdout.contains("Configuration")); + assert!(stdout.contains("Resolution")); + assert!(stdout.contains("invalid [exporters.atof].mode")); +} + #[test] fn cli_run_dry_run_resolves_config_and_command() { let temp = tempfile::tempdir().unwrap(); diff --git a/crates/cli/tests/coverage/doctor_tests.rs b/crates/cli/tests/coverage/doctor_tests.rs index 1a9d63e0..5ae70fc9 100644 --- a/crates/cli/tests/coverage/doctor_tests.rs +++ b/crates/cli/tests/coverage/doctor_tests.rs @@ -34,6 +34,11 @@ fn empty_report() -> DoctorReport { active: false, details: "not present".into(), }, + resolution: Check { + name: "Resolution", + status: Status::Pass, + details: "valid".into(), + }, default_agent: None, configured_agents: vec![], }, @@ -79,6 +84,14 @@ fn exit_code_fails_when_workspace_config_is_invalid() { assert_eq!(exit_code(&report), 1); } +#[test] +fn exit_code_fails_when_config_resolution_fails() { + let mut report = empty_report(); + report.configuration.resolution.status = Status::Fail; + 
report.configuration.resolution.details = "invalid gateway configuration shape".into(); + assert_eq!(exit_code(&report), 1); +} + #[test] fn exit_code_fails_when_agent_readiness_fails() { let mut report = empty_report(); @@ -168,6 +181,19 @@ fn format_human_reports_failure_summary_when_anything_failed() { assert!(rendered.contains("Some checks FAILED")); } +#[test] +fn format_human_reports_config_resolution_failure() { + let mut report = empty_report(); + report.configuration.resolution.status = Status::Fail; + report.configuration.resolution.details = + "could not resolve merged config: invalid [exporters.atof].mode".into(); + + let rendered = format_human(&report); + + assert!(rendered.contains("Resolution ✗ could not resolve merged config")); + assert!(rendered.contains("Some checks FAILED")); +} + #[test] fn format_human_distinguishes_pass_with_warnings_from_clean_pass() { let mut report = empty_report(); From cc5794d6d256d341226871c9393b71835acf1d61 Mon Sep 17 00:00:00 2001 From: Will Killian Date: Tue, 12 May 2026 16:43:22 -0400 Subject: [PATCH 12/15] feat(cli): route Codex ChatGPT-Plus OAuth to the ChatGPT backend Codex supports two auth flows: OPENAI_API_KEY (routed to api.openai.com/v1) and ChatGPT-Plus OAuth via `codex --login` (routed to chatgpt.com/backend-api/codex). Previously the gateway always forwarded to api.openai.com, which rejects the OAuth JWT. The gateway now detects the JWT (`Bearer eyJ` prefix) and, when no OPENAI_API_KEY is set, routes to the ChatGPT backend automatically. When OPENAI_API_KEY is present, the existing behavior is preserved (strip JWT, inject API key, route to api.openai.com). 
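A minimal sketch of the routing rule just described — the helper names (`is_chatgpt_jwt`, `upstream_base`) are hypothetical stand-ins, not the gateway's actual identifiers — showing how the decision reduces to two inputs, the inbound Authorization header and the presence of OPENAI_API_KEY:

```rust
// Illustrative only: the real gateway also checks that the route is
// OpenAI-family and handles the /v1 path prefix; see gateway.rs below.
const CHATGPT_CODEX_BASE_URL: &str = "https://chatgpt.com/backend-api/codex";
const OPENAI_BASE_URL: &str = "https://api.openai.com/v1";

/// ChatGPT-Plus OAuth access tokens are JWTs, and a JWT's base64-encoded
/// header always begins with `eyJ`; plain `sk-...` API keys never match.
fn is_chatgpt_jwt(authorization: Option<&str>) -> bool {
    authorization.is_some_and(|v| v.starts_with("Bearer eyJ"))
}

/// Picks the upstream base URL: forward the JWT to the ChatGPT backend only
/// when there is no non-empty OPENAI_API_KEY available to substitute.
fn upstream_base(authorization: Option<&str>, openai_api_key: Option<&str>) -> &'static str {
    let has_env_key = openai_api_key.is_some_and(|k| !k.trim().is_empty());
    if is_chatgpt_jwt(authorization) && !has_env_key {
        // OAuth JWT and no replacement key: route to chatgpt.com/backend-api/codex.
        CHATGPT_CODEX_BASE_URL
    } else {
        // Env key present (JWT gets stripped and replaced) or no JWT at all.
        OPENAI_BASE_URL
    }
}
```

The actual change additionally restricts the check to OpenAI-family routes and strips the `/v1` prefix before appending the request path to the ChatGPT base URL.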
Key changes: - gateway: add dual-upstream routing via should_use_chatgpt_backend() and chatgpt_upstream_url() (no /v1 prefix for ChatGPT backend) - launcher: flip requires_openai_auth=true so Codex sends its own credentials; warn only when neither OPENAI_API_KEY nor ~/.codex/auth.json is present - setup: remove Codex upstream URL wizard question (auto-routing makes it unnecessary); keep the auth guide showing both options - Comments reference Codex source locations for key behavioral details Co-Authored-By: Claude Opus 4.6 (1M context) Signed-off-by: Will Killian --- crates/cli/src/gateway.rs | 127 ++++++++++++++------ crates/cli/src/launcher.rs | 47 +++++--- crates/cli/src/setup.rs | 69 ++--------- crates/cli/tests/coverage/config_tests.rs | 1 + crates/cli/tests/coverage/gateway_tests.rs | 71 ++++++++++- crates/cli/tests/coverage/launcher_tests.rs | 8 +- crates/cli/tests/coverage/server_tests.rs | 1 + crates/cli/tests/coverage/session_tests.rs | 19 +++ crates/cli/tests/coverage/setup_tests.rs | 37 ------ 9 files changed, 226 insertions(+), 154 deletions(-) diff --git a/crates/cli/src/gateway.rs b/crates/cli/src/gateway.rs index fcd4f538..b0561a8b 100644 --- a/crates/cli/src/gateway.rs +++ b/crates/cli/src/gateway.rs @@ -30,6 +30,12 @@ use crate::session::{GatewayCallPrep, LlmGatewayStart}; const MAX_BODY_BYTES: usize = 100 * 1024 * 1024; +// ChatGPT backend base URL used by Codex when authenticated with ChatGPT-Plus OAuth. Mirrors the +// `CHATGPT_CODEX_BASE_URL` constant in `codex-rs/model-provider-info/src/lib.rs`, which Codex +// selects in `ModelProviderInfo::to_api_provider` when `auth_mode` is `Chatgpt` or +// `ChatgptAuthTokens`. The standard `api.openai.com/v1` base is used for API-key auth instead. +const CHATGPT_CODEX_BASE_URL: &str = "https://chatgpt.com/backend-api/codex"; + /// Proxies supported LLM API requests through NeMo Flow's managed execution pipeline. 
/// /// The gateway buffers the inbound body once, opens a managed LLM call against the resolved @@ -81,14 +87,16 @@ async fn prepare_gateway_request( .await .map_err(|error| CliError::InvalidPayload(error.to_string()))?; let request_json = serde_json::from_slice::<Value>(&body_bytes).unwrap_or(Value::Null); - let upstream_url = provider.upstream_url( - config, - parts - .uri - .path_and_query() - .map(|p| p.as_str()) - .unwrap_or(parts.uri.path()), - ); + let path_and_query = parts + .uri + .path_and_query() + .map(|p| p.as_str()) + .unwrap_or(parts.uri.path()); + let upstream_url = if should_use_chatgpt_backend(provider, &parts.headers) { + chatgpt_upstream_url(path_and_query) + } else { + provider.upstream_url(config, path_and_query) + }; let streaming = request_json .get("stream") .and_then(Value::as_bool) @@ -538,9 +546,8 @@ fn encode_sse_frame(event_json: &Value, route: ProviderRoute) -> String { // Forwards the buffered request to the upstream provider with only the safe request headers. This // is shared by the buffered and streaming managed funcs so header filtering stays consistent. When -// the inbound request carries no auth (e.g., codex with `requires_openai_auth=false` per NMF-86) -// the gateway injects the provider's API key from environment so the upstream sees authenticated -// traffic without forcing the agent to manage credentials. +// `OPENAI_API_KEY` is set the gateway replaces any inbound ChatGPT-Plus OAuth JWT with the env +// key; otherwise the inbound credentials are forwarded as-is. async fn forward_upstream_request( http: &reqwest::Client, method: &Method, @@ -570,14 +577,50 @@ upstream.send().await } -// Removes JWT-shaped bearer tokens from inbound `Authorization` on OpenAI routes when we have -// a replacement `OPENAI_API_KEY` to inject.
The detector triggers strictly on the `Bearer eyJ` -// prefix (base64-encoded JSON header), which is what Codex 0.130 sends from `~/.codex/auth.json` -// — that JWT is a consumer ChatGPT-Plus token rejected by `api.openai.com` / LiteLLM-fronted -// endpoints (NVIDIA's `inference-api.nvidia.com`) with 401. After stripping, `inject_provider_auth` -// substitutes the env-provided key and the upstream sees valid auth. ChatGPT OAuth flows in -// general may use opaque tokens too, but those don't match the prefix and are forwarded as-is. -// Real `sk-...` API keys are likewise unaffected. Tracks NMF-86. +// Builds the upstream URL for the ChatGPT backend. Codex's standard base URL is +// `api.openai.com/v1` (the `/v1` is part of the base), while the ChatGPT backend base is +// `chatgpt.com/backend-api/codex` (no `/v1`). Both append `/responses` to their base, so the +// ChatGPT path is `.../codex/responses`, not `.../codex/v1/responses`. Strip any `/v1` prefix +// that the gateway's route matcher may have included from the inbound request path. +fn chatgpt_upstream_url(path_and_query: &str) -> String { + let path = path_and_query.strip_prefix("/v1").unwrap_or(path_and_query); + format!("{CHATGPT_CODEX_BASE_URL}{path}") +} + +// Returns `true` when the `Authorization` header carries a JWT-shaped bearer token (`Bearer eyJ` +// prefix). Codex stores ChatGPT-Plus OAuth tokens in `~/.codex/auth.json` as a `TokenData` +// struct with `access_token`, `refresh_token`, and `id_token` fields (see +// `codex-rs/login/src/token_data.rs`). The access token is a JWT whose base64 header starts +// with `eyJ`. Real `sk-...` API keys and opaque tokens do not match this pattern. 
+fn has_chatgpt_jwt(headers: &HeaderMap) -> bool { + headers + .get(http::header::AUTHORIZATION) + .and_then(|value| value.to_str().ok()) + .is_some_and(|value| value.starts_with("Bearer eyJ")) +} + +// Returns `true` when the gateway should route the request to the ChatGPT backend +// (`chatgpt.com/backend-api/codex`) instead of the configured `openai_base_url`. Mirrors the +// base-URL selection in Codex's `ModelProviderInfo::to_api_provider` (`codex-rs/model-provider- +// info/src/lib.rs`): ChatGPT OAuth routes to `CHATGPT_CODEX_BASE_URL`, API-key auth routes to +// `api.openai.com/v1`. Fires when all of: (1) the route is OpenAI-family, (2) the inbound +// request carries a ChatGPT OAuth JWT, and (3) no `OPENAI_API_KEY` is available to substitute. +fn should_use_chatgpt_backend(route: ProviderRoute, headers: &HeaderMap) -> bool { + route.is_openai() + && has_chatgpt_jwt(headers) + && std::env::var("OPENAI_API_KEY") + .ok() + .filter(|v| !v.trim().is_empty()) + .is_none() +} + +// Removes JWT-shaped bearer tokens from inbound `Authorization` on OpenAI routes when we have a +// replacement `OPENAI_API_KEY` to inject. Codex's `BearerAuthProvider` (`codex-rs/model-provider/ +// src/bearer_auth_provider.rs`) always sets an `Authorization: Bearer <token>` header — either +// the ChatGPT OAuth access token (a JWT starting `eyJ`) or a plain API key (`sk-...`). When +// `OPENAI_API_KEY` is set, the gateway strips the JWT so `inject_provider_auth` can substitute +// the env key for the `api.openai.com` route. When the key is absent, the JWT is preserved and +// `should_use_chatgpt_backend` routes to the ChatGPT backend. Real `sk-...` keys are unaffected. fn strip_chatgpt_oauth_for_openai_route( headers: &HeaderMap, route: ProviderRoute, @@ -605,11 +648,11 @@ } // If the inbound request has no provider auth header (Authorization / x-api-key / api-key), read
Tracks NMF-86: -// codex 0.130 prefers `~/.codex/auth.json` ChatGPT-Plus OAuth over `OPENAI_API_KEY` and that JWT is -// rejected by `api.openai.com`, so codex now runs with `requires_openai_auth=false` and the -// gateway owns credentials. If neither inbound auth nor the env var is present, the request is -// forwarded as-is and the upstream returns a real 401 (caller can detect and surface). +// the provider's standard API key env var and attach it to the outbound request. When codex sends +// its ChatGPT-Plus OAuth JWT the gateway forwards it unless `OPENAI_API_KEY` is set, in which case +// `strip_chatgpt_oauth_for_openai_route` removes the JWT first and this function injects the env +// key. If neither inbound auth nor the env var is present, the request is forwarded as-is and the +// upstream returns a real 401 (caller can detect and surface). fn inject_provider_auth( builder: reqwest::RequestBuilder, route: ProviderRoute, @@ -724,14 +767,16 @@ pub(crate) async fn models( ); } let provider = ProviderRoute::OpenAiModels; - let upstream_url = provider.upstream_url( - &state.config, - parts - .uri - .path_and_query() - .map(|p| p.as_str()) - .unwrap_or(parts.uri.path()), - ); + let path_and_query = parts + .uri + .path_and_query() + .map(|p| p.as_str()) + .unwrap_or(parts.uri.path()); + let upstream_url = if should_use_chatgpt_backend(provider, &parts.headers) { + chatgpt_upstream_url(path_and_query) + } else { + provider.upstream_url(&state.config, path_and_query) + }; // Whitespace-only keys are effectively missing: stripping the inbound JWT and injecting an // empty/whitespace bearer just trades one 401 for another while losing observability. let has_openai_env = std::env::var("OPENAI_API_KEY") @@ -779,6 +824,13 @@ impl ProviderRoute { } } + const fn is_openai(self) -> bool { + matches!( + self, + Self::OpenAiResponses | Self::OpenAiChatCompletions | Self::OpenAiModels + ) + } + // Returns the provider route name recorded in LLM event metadata. 
These names split OpenAI API // variants because their request/response schemas differ even when they share a base URL. const fn name(self) -> &'static str { @@ -797,12 +849,19 @@ impl ProviderRoute { fn upstream_url(self, config: &crate::config::GatewayConfig, path_and_query: &str) -> String { let base = match self { Self::OpenAiResponses | Self::OpenAiChatCompletions | Self::OpenAiModels => { - config.openai_base_url.trim_end_matches('/') + config.openai_base_url.as_str() } Self::AnthropicMessages | Self::AnthropicCountTokens => { - config.anthropic_base_url.trim_end_matches('/') + config.anthropic_base_url.as_str() } }; + self.upstream_url_with_base(base, path_and_query) + } + + // Like `upstream_url` but with an explicit base URL. Used by the ChatGPT OAuth fallback path + // which routes to `CHATGPT_CODEX_BASE_URL` instead of the configured `openai_base_url`. + fn upstream_url_with_base(self, base: &str, path_and_query: &str) -> String { + let base = base.trim_end_matches('/'); let path_and_query = match self { Self::OpenAiResponses | Self::OpenAiChatCompletions | Self::OpenAiModels if !path_and_query.starts_with("/v1/") => diff --git a/crates/cli/src/launcher.rs b/crates/cli/src/launcher.rs index 8d7ab260..a3ffd5f9 100644 --- a/crates/cli/src/launcher.rs +++ b/crates/cli/src/launcher.rs @@ -371,21 +371,31 @@ impl PreparedRun { // overriding `model_providers.openai`. Uses `features.hooks=true` introduced in codex-cli // 0.129; the older `features.codex_hooks` is deprecated. Requires codex-cli >= 0.129.0. fn prepare_codex(&mut self, gateway_url: &str) { - // Codex provider override now uses `requires_openai_auth=false` (see NMF-86): codex no - // longer sends credentials, the gateway injects `OPENAI_API_KEY` instead. 
Surface the - // missing-key state EARLY on stderr — a buried `self.notes.push` only renders under - // `--print` / `--dry-run`, which means the silent live-run case (the one users actually - // hit) would discover the missing key as a confusing 401 mid-session. - if std::env::var("OPENAI_API_KEY") + // Codex resolves auth via `CodexAuth::from_auth_dot_json` (`codex-rs/login/src/auth/ + // manager.rs`): `auth_mode=ApiKey` uses `OPENAI_API_KEY`, `auth_mode=Chatgpt` uses the + // OAuth token from `~/.codex/auth.json`. With `requires_openai_auth=true` the provider + // config tells Codex to attach whichever credential it has. The gateway then either + // substitutes `OPENAI_API_KEY` (routing to `api.openai.com`) or forwards the JWT as-is + // (routing to `chatgpt.com/backend-api/codex`). Warn when neither source is present. + let has_openai_key = std::env::var("OPENAI_API_KEY") .ok() - .is_none_or(|v| v.is_empty()) - { + .is_some_and(|v| !v.is_empty()); + // Codex persists OAuth tokens to `~/.codex/auth.json` via `AuthDotJson` in + // `codex-rs/login/src/auth/storage.rs`. Check for the file rather than parsing it — + // Codex handles token refresh itself at runtime. + let has_codex_auth = std::env::var_os("HOME") + .or_else(|| std::env::var_os("USERPROFILE")) + .map(|h| { + std::path::PathBuf::from(h) + .join(".codex/auth.json") + .exists() + }) + .unwrap_or(false); + if !has_openai_key && !has_codex_auth { eprintln!( - "warning: OPENAI_API_KEY is not set. Codex routes through the NeMo Flow gateway, \ - which forwards to api.openai.com using OPENAI_API_KEY from the environment. \ - Without it the upstream will return 401. Export your key before launching codex \ - (e.g. `export OPENAI_API_KEY=sk-...`), or pass `--openai-base-url` to an upstream \ - that needs no key." + "warning: No OpenAI credentials found. Either export OPENAI_API_KEY \ + (e.g. 
`export OPENAI_API_KEY=sk-...`), log in to codex (`codex --login`), \ + or pass `--openai-base-url` to an upstream that needs no key." ); } let hook_command = hook_forward_command(&transparent_hook_executable(), CodingAgent::Codex); @@ -611,11 +621,14 @@ fn codex_gateway_provider_config(gateway_url: &str) -> String { // upstreams the user falls back to daemon mode and points codex directly at its configured // upstream — we observe hooks but not LLM calls. // - // `requires_openai_auth=false` so codex doesn't send the ChatGPT-Plus OAuth JWT from - // `~/.codex/auth.json` (the JWT is rejected by `api.openai.com` with 401). The gateway - // injects `OPENAI_API_KEY` itself; see `gateway.rs::inject_provider_auth`. Tracks NMF-86. + // `requires_openai_auth=true` so Codex's `resolve_provider_auth` (`codex-rs/model-provider/ + // src/auth.rs`) attaches credentials via `BearerAuthProvider`. When the auth mode is + // `Chatgpt` the token is an OAuth JWT; when `ApiKey` it is the `OPENAI_API_KEY` value. + // The gateway inspects the inbound `Authorization` header: if `OPENAI_API_KEY` is set in the + // environment the JWT is replaced (see `gateway.rs::strip_chatgpt_oauth_for_openai_route` + // and `inject_provider_auth`); otherwise the JWT is forwarded to the ChatGPT backend. format!( - "model_providers.nemo-flow-openai={{name=\"NeMo Flow OpenAI\",base_url={},wire_api=\"responses\",requires_openai_auth=false,supports_websockets=false}}", + "model_providers.nemo-flow-openai={{name=\"NeMo Flow OpenAI\",base_url={},wire_api=\"responses\",requires_openai_auth=true,supports_websockets=false}}", toml_string(gateway_url) ) } diff --git a/crates/cli/src/setup.rs b/crates/cli/src/setup.rs index ef1dfa43..6fcc7703 100644 --- a/crates/cli/src/setup.rs +++ b/crates/cli/src/setup.rs @@ -67,11 +67,6 @@ pub(crate) struct SetupAnswers { pub agents: Vec<CodingAgent>, pub backends: Vec<ObservabilityBackend>, pub openinference_endpoint: Option<String>, - /// Custom OpenAI-compatible upstream URL written to `[upstream] openai_base_url`. `None`
`None` - /// when the user keeps the default (`api.openai.com`) — keeps minimal configs minimal. - /// Currently surfaced by the codex setup branch; reusable by any future agent on the - /// OpenAI route family. - pub openai_base_url: Option<String>, /// Path recorded under `[agents.hermes].hooks_path` when hermes is selected. Set by `run` /// from `hermes_hooks_path_for_scope` so the wizard preview shows the file the launcher /// will reference. `None` when hermes wasn't selected. @@ -157,12 +152,6 @@ pub(crate) fn build_config(answers: &SetupAnswers) -> DocumentMut { doc["agents"] = Item::Table(agents_table); } - if let Some(base_url) = answers.openai_base_url.as_deref() { - let mut upstream = Table::new(); - upstream["openai_base_url"] = value(base_url); - doc["upstream"] = Item::Table(upstream); - } - doc } @@ -399,19 +388,15 @@ pub(crate) fn prompt_user( }; let (backends, openinference_endpoint) = ask_backends(&theme, &defaults)?; - let openai_base_url = if agents.contains(&CodingAgent::Codex) { + if agents.contains(&CodingAgent::Codex) { print_codex_api_key_guide(); - ask_openai_base_url(&theme, defaults.openai_base_url.as_deref())?
- } else { - None - }; + } Ok(SetupAnswers { scope, agents, backends, openinference_endpoint, - openai_base_url, hermes_hooks_path: None, }) } @@ -481,7 +466,6 @@ struct Defaults { agents: Vec<CodingAgent>, atif_enabled: bool, openinference_endpoint: Option<String>, - openai_base_url: Option<String>, } impl Defaults { @@ -490,7 +474,6 @@ impl Defaults { || !self.agents.is_empty() || self.atif_enabled || self.openinference_endpoint.is_some() - || self.openai_base_url.is_some() } } @@ -541,12 +524,6 @@ fn read_existing_defaults() -> Option<Defaults> { .and_then(|t| t.get("endpoint")) .and_then(|i| i.as_str()) .map(str::to_string), - openai_base_url: doc - .get("upstream") - .and_then(|i| i.as_table()) - .and_then(|t| t.get("openai_base_url")) - .and_then(|i| i.as_str()) - .map(str::to_string), }) } @@ -571,43 +548,21 @@ fn read_agents_from_doc(doc: &DocumentMut) -> Vec<CodingAgent> { } fn print_codex_api_key_guide() { - // Codex 0.130 only accepts `wire_api="responses"` (codex#7782 removed `chat`), so codex - // transparent run requires a Responses-compatible upstream. The gateway injects the API - // key on outbound forwards (NMF-86) — user just sets OPENAI_API_KEY in their environment; - // any Bearer-token key works (OpenAI, internal proxy, etc.) as long as the upstream - // accepts it. + // Codex supports two auth flows (see `codex-rs/login/src/auth/manager.rs`): + // 1. ChatGPT-Plus PKCE OAuth via `codex --login` → tokens stored in `~/.codex/auth.json` + // 2. OpenAI API key via `OPENAI_API_KEY` env var + // The gateway routes to the correct upstream automatically: ChatGPT OAuth goes to + // `chatgpt.com/backend-api/codex`, API key goes to `api.openai.com`. println!(); println!(" ℹ Codex sends Responses-API requests through the gateway."); - println!(" The gateway injects OPENAI_API_KEY on outbound forwards.
Set it before"); - println!(" launching codex: export OPENAI_API_KEY=..."); - println!(" Any Bearer-token key works (OpenAI developer key, internal proxy, etc.)"); - println!(" — the ChatGPT-Plus OAuth in ~/.codex/auth.json is NOT used."); + println!(" Authentication (pick one):"); + println!(" • ChatGPT-Plus login: codex --login (uses ~/.codex/auth.json)"); + println!(" • OpenAI API key: export OPENAI_API_KEY=sk-..."); + println!(" When OPENAI_API_KEY is set the gateway uses it; otherwise the"); + println!(" ChatGPT-Plus OAuth token is forwarded to the ChatGPT backend."); println!(); } -fn ask_openai_base_url( - theme: &ColorfulTheme, - existing: Option<&str>, -) -> Result<Option<String>, CliError> { - // Pre-fill with the existing `[upstream] openai_base_url` if there is one, else the OpenAI - // default. We return Some only when the user's value differs from the OpenAI default — - // matching the upstream behavior (writes minimal configs, omits the default). - let initial = existing.unwrap_or("https://api.openai.com"); - let url: String = Input::with_theme(theme) - .with_prompt("Codex upstream URL (Responses-compatible)") - .with_initial_text(initial) - .interact_text() - .map_err(setup_error)?; - // Treat blank input the same as accepting the default — otherwise `openai_base_url = ""` - // lands in config.toml and the launcher tries to use an empty URL on the next run.
- let url = url.trim(); - if url.is_empty() || url == "https://api.openai.com" { - Ok(None) - } else { - Ok(Some(url.to_string())) - } -} - fn ensure_tty() -> Result<(), CliError> { if !std::io::stdin().is_terminal() { return Err(CliError::Config( diff --git a/crates/cli/tests/coverage/config_tests.rs b/crates/cli/tests/coverage/config_tests.rs index e31208ea..aa804596 100644 --- a/crates/cli/tests/coverage/config_tests.rs +++ b/crates/cli/tests/coverage/config_tests.rs @@ -9,6 +9,7 @@ fn config() -> GatewayConfig { GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://openai".into(), + anthropic_base_url: "http://anthropic".into(), atif_dir: Some(PathBuf::from("default-atif")), openinference_endpoint: Some("http://default-otel".into()), diff --git a/crates/cli/tests/coverage/gateway_tests.rs b/crates/cli/tests/coverage/gateway_tests.rs index 3b15abe9..7aa94fc6 100644 --- a/crates/cli/tests/coverage/gateway_tests.rs +++ b/crates/cli/tests/coverage/gateway_tests.rs @@ -81,6 +81,7 @@ fn provider_routes_preserve_path_query_and_choose_upstream() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://openai/".into(), + anthropic_base_url: "http://anthropic/".into(), atif_dir: None, openinference_endpoint: None, @@ -204,11 +205,9 @@ fn observable_headers_omit_secrets_and_transport_headers() { #[test] fn strips_chatgpt_plus_jwt_from_openai_route_inbound() { - // NMF-86: codex 0.130 still sends the ChatGPT-Plus OAuth JWT from ~/.codex/auth.json on - // outbound requests even when its provider override sets `requires_openai_auth=false`. The - // JWT is a consumer token rejected by api.openai.com / LiteLLM-fronted endpoints with 401. - // The gateway strips JWT-shaped (`Bearer eyJ...`) Authorization on OpenAI routes so the - // auth-injection path falls through and substitutes a real env-provided key. 
+ // When OPENAI_API_KEY is set the gateway strips JWT-shaped (`Bearer eyJ...`) Authorization + // from inbound OpenAI-route requests so the auth-injection path substitutes the env key + // instead of forwarding the ChatGPT-Plus OAuth JWT. let mut inbound = HeaderMap::new(); inbound.insert( "authorization", @@ -342,11 +341,72 @@ fn skips_injection_when_env_var_unset() { assert!(built.headers().get("authorization").is_none()); } +// --- ChatGPT backend routing tests --- + +#[test] +fn chatgpt_jwt_routes_to_chatgpt_backend_when_no_api_key() { + let mut headers = HeaderMap::new(); + headers.insert( + "authorization", + HeaderValue::from_static("Bearer eyJhbGciOiJIUzI1NiJ9.deadbeef.signature"), + ); + // With no OPENAI_API_KEY and a JWT, should_use_chatgpt_backend returns true and the URL is + // built against the ChatGPT backend (no /v1 prefix — ChatGPT backend doesn't use it). + let result = chatgpt_upstream_url("/responses"); + assert_eq!(result, "https://chatgpt.com/backend-api/codex/responses"); + + // has_chatgpt_jwt should detect the JWT + assert!(has_chatgpt_jwt(&headers)); + assert!(ProviderRoute::OpenAiResponses.is_openai()); +} + +#[test] +fn no_jwt_does_not_trigger_chatgpt_backend() { + let mut headers = HeaderMap::new(); + headers.insert( + "authorization", + HeaderValue::from_static("Bearer sk-real-api-key"), + ); + assert!(!has_chatgpt_jwt(&headers)); + + // Empty headers also should not trigger + assert!(!has_chatgpt_jwt(&HeaderMap::new())); +} + +#[test] +fn anthropic_route_never_triggers_chatgpt_backend() { + let mut headers = HeaderMap::new(); + headers.insert( + "authorization", + HeaderValue::from_static("Bearer eyJhbGciOiJIUzI1NiJ9.deadbeef.signature"), + ); + assert!(!ProviderRoute::AnthropicMessages.is_openai()); +} + +#[test] +fn chatgpt_backend_url_omits_v1_prefix() { + // The ChatGPT backend expects paths directly under the base, not /v1-prefixed. 
+ assert_eq!( + chatgpt_upstream_url("/responses"), + "https://chatgpt.com/backend-api/codex/responses" + ); + assert_eq!( + chatgpt_upstream_url("/models"), + "https://chatgpt.com/backend-api/codex/models" + ); + // /v1-prefixed inbound paths are stripped + assert_eq!( + chatgpt_upstream_url("/v1/responses"), + "https://chatgpt.com/backend-api/codex/responses" + ); +} + #[tokio::test] async fn passthrough_rejects_unsupported_provider_path_directly() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://openai".into(), + anthropic_base_url: "http://anthropic".into(), atif_dir: None, openinference_endpoint: None, @@ -374,6 +434,7 @@ async fn models_rejects_non_get_requests_directly() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://openai".into(), + anthropic_base_url: "http://anthropic".into(), atif_dir: None, openinference_endpoint: None, diff --git a/crates/cli/tests/coverage/launcher_tests.rs b/crates/cli/tests/coverage/launcher_tests.rs index 861b96bb..dffb6015 100644 --- a/crates/cli/tests/coverage/launcher_tests.rs +++ b/crates/cli/tests/coverage/launcher_tests.rs @@ -222,10 +222,10 @@ fn prepares_codex_config_overrides() { .iter() .any(|arg| arg.contains("model_providers.nemo-flow-openai") && arg.contains("base_url=\"http://127.0.0.1:1234\"") - // NMF-86 mitigation: codex must NOT send credentials. The gateway injects - // OPENAI_API_KEY itself, so the JWT from ~/.codex/auth.json never reaches - // api.openai.com. - && arg.contains("requires_openai_auth=false") + // Codex sends its own credentials (ChatGPT-Plus OAuth or OPENAI_API_KEY). + // When OPENAI_API_KEY is in the environment the gateway substitutes it; + // otherwise codex's own auth is forwarded as-is. 
+ && arg.contains("requires_openai_auth=true") && arg.contains("supports_websockets=false")) ); assert!( diff --git a/crates/cli/tests/coverage/server_tests.rs b/crates/cli/tests/coverage/server_tests.rs index ad68064f..d7c914f2 100644 --- a/crates/cli/tests/coverage/server_tests.rs +++ b/crates/cli/tests/coverage/server_tests.rs @@ -36,6 +36,7 @@ fn test_config() -> GatewayConfig { GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, openinference_endpoint: None, diff --git a/crates/cli/tests/coverage/session_tests.rs b/crates/cli/tests/coverage/session_tests.rs index 3f218980..20e6f870 100644 --- a/crates/cli/tests/coverage/session_tests.rs +++ b/crates/cli/tests/coverage/session_tests.rs @@ -12,6 +12,7 @@ async fn nests_agent_subagent_and_tool_lifecycle() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, openinference_endpoint: None, @@ -88,6 +89,7 @@ async fn writes_atif_on_session_end_from_header_config() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, openinference_endpoint: None, @@ -151,6 +153,7 @@ async fn duplicate_agent_end_does_not_overwrite_atif_with_empty_session() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: Some(temp.path().to_path_buf()), openinference_endpoint: None, @@ -227,6 +230,7 @@ async fn writes_hermes_api_hook_usage_to_atif_metrics() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, openinference_endpoint: None, @@ -306,6 +310,7 
@@ async fn handles_out_of_order_subagent_and_tool_end_events() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, openinference_endpoint: None, @@ -385,6 +390,7 @@ async fn out_of_order_started_subagent_end_does_not_leak_scope() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, openinference_endpoint: None, @@ -457,6 +463,7 @@ async fn agent_end_closes_nested_active_subagents_lifo() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, openinference_endpoint: None, @@ -513,6 +520,7 @@ async fn llm_lifecycle_starts_implicit_gateway_session() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, openinference_endpoint: None, @@ -606,6 +614,7 @@ async fn llm_lifecycle_uses_single_active_hook_session_when_header_is_missing() let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, openinference_endpoint: None, @@ -663,6 +672,7 @@ async fn single_pending_llm_hint_claims_next_gateway_llm() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, openinference_endpoint: None, @@ -760,6 +770,7 @@ async fn multiple_llm_hints_resolve_by_generation_id() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, 
openinference_endpoint: None, @@ -875,6 +886,7 @@ async fn ambiguous_llm_hints_fall_back_to_agent_scope() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, openinference_endpoint: None, @@ -974,6 +986,7 @@ async fn no_active_hint_reuses_last_llm_owner() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, openinference_endpoint: None, @@ -1634,6 +1647,7 @@ fn session_test_config() -> GatewayConfig { GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: None, openinference_endpoint: None, @@ -1652,6 +1666,7 @@ async fn gateway_first_anthropic_call_labels_session_as_claude_code() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: Some(temp.path().to_path_buf()), openinference_endpoint: None, @@ -1700,6 +1715,7 @@ async fn gateway_first_openai_responses_call_labels_session_as_codex() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: Some(temp.path().to_path_buf()), openinference_endpoint: None, @@ -1744,6 +1760,7 @@ async fn synthetic_gateway_session_keeps_gateway_label() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: Some(temp.path().to_path_buf()), openinference_endpoint: None, @@ -1790,6 +1807,7 @@ async fn turn_ended_snapshots_atif_without_closing_scope() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: 
"http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: Some(temp.path().to_path_buf()), openinference_endpoint: None, @@ -1872,6 +1890,7 @@ async fn turn_ended_is_noop_for_session_with_no_agent_scope() { let config = GatewayConfig { bind: "127.0.0.1:0".parse().unwrap(), openai_base_url: "http://127.0.0.1".into(), + anthropic_base_url: "http://127.0.0.1".into(), atif_dir: Some(temp.path().to_path_buf()), openinference_endpoint: None, diff --git a/crates/cli/tests/coverage/setup_tests.rs b/crates/cli/tests/coverage/setup_tests.rs index c308a5d1..f90a0139 100644 --- a/crates/cli/tests/coverage/setup_tests.rs +++ b/crates/cli/tests/coverage/setup_tests.rs @@ -81,7 +81,6 @@ fn build_config_emits_observability_section_when_atif_selected() { agents: vec![], backends: vec![ObservabilityBackend::Atif], openinference_endpoint: None, - openai_base_url: None, hermes_hooks_path: None, }; @@ -100,7 +99,6 @@ fn build_config_emits_export_section_when_openinference_selected() { agents: vec![], backends: vec![ObservabilityBackend::OpenInference], openinference_endpoint: Some("http://localhost:6006/v1/traces".into()), - openai_base_url: None, hermes_hooks_path: None, }; @@ -118,7 +116,6 @@ fn build_config_skips_empty_sections_when_no_backends_selected() { agents: vec![], backends: vec![], openinference_endpoint: None, - openai_base_url: None, hermes_hooks_path: None, }; @@ -137,7 +134,6 @@ fn build_config_emits_agents_block_with_user_facing_keys() { agents: vec![CodingAgent::ClaudeCode, CodingAgent::Codex], backends: vec![], openinference_endpoint: None, - openai_base_url: None, hermes_hooks_path: None, }; @@ -151,35 +147,6 @@ fn build_config_emits_agents_block_with_user_facing_keys() { assert!(rendered.contains(r#"command = "codex""#)); } -#[test] -fn build_config_writes_upstream_block_for_custom_openai_base_url() { - let answers = SetupAnswers { - scope: ConfigScope::Project, - agents: vec![CodingAgent::Codex], - backends: 
vec![ObservabilityBackend::Atif], - openinference_endpoint: None, - openai_base_url: Some("https://litellm.internal/v1".into()), - hermes_hooks_path: None, - }; - let rendered = build_config(&answers).to_string(); - assert!(rendered.contains("[upstream]")); - assert!(rendered.contains(r#"openai_base_url = "https://litellm.internal/v1""#)); -} - -#[test] -fn build_config_omits_upstream_block_when_openai_base_url_is_none() { - let answers = SetupAnswers { - scope: ConfigScope::Project, - agents: vec![CodingAgent::Codex], - backends: vec![ObservabilityBackend::Atif], - openinference_endpoint: None, - openai_base_url: None, - hermes_hooks_path: None, - }; - let rendered = build_config(&answers).to_string(); - assert!(!rendered.contains("[upstream]")); -} - #[test] fn save_config_writes_project_scope_to_workspace_dir() { let answers = SetupAnswers { @@ -187,7 +154,6 @@ fn save_config_writes_project_scope_to_workspace_dir() { agents: vec![CodingAgent::ClaudeCode], backends: vec![ObservabilityBackend::Atif], openinference_endpoint: None, - openai_base_url: None, hermes_hooks_path: None, }; let doc = build_config(&answers); @@ -232,7 +198,6 @@ command = "codex --full-auto" agents: vec![CodingAgent::ClaudeCode], backends: vec![ObservabilityBackend::Atif], openinference_endpoint: None, - openai_base_url: None, hermes_hooks_path: None, }; let doc = build_config(&answers); @@ -281,7 +246,6 @@ fn save_config_writes_both_scopes_when_both_selected() { agents: vec![], backends: vec![ObservabilityBackend::Atif], openinference_endpoint: None, - openai_base_url: None, hermes_hooks_path: None, }; let doc = build_config(&answers); @@ -302,7 +266,6 @@ fn build_config_emits_hooks_path_for_hermes_when_set() { agents: vec![CodingAgent::Hermes], backends: vec![], openinference_endpoint: None, - openai_base_url: None, hermes_hooks_path: Some(std::path::PathBuf::from("/tmp/proj/.hermes/config.yaml")), }; let rendered = build_config(&answers).to_string(); From 
81475c6894547d82c8462772c38afd02a94f6c95 Mon Sep 17 00:00:00 2001 From: Ajay Thorve Date: Tue, 12 May 2026 14:03:29 -0700 Subject: [PATCH 13/15] Fix CLI setup test initializer Signed-off-by: Ajay Thorve --- crates/cli/tests/coverage/setup_tests.rs | 1 - 1 file changed, 1 deletion(-) diff --git a/crates/cli/tests/coverage/setup_tests.rs b/crates/cli/tests/coverage/setup_tests.rs index 8a00524c..403702c3 100644 --- a/crates/cli/tests/coverage/setup_tests.rs +++ b/crates/cli/tests/coverage/setup_tests.rs @@ -121,7 +121,6 @@ fn build_config_emits_atof_write_options_when_atof_selected() { agents: vec![], backends: vec![ObservabilityBackend::Atof], openinference_endpoint: None, - openai_base_url: None, hermes_hooks_path: None, }; From e289e9c041e2be322f707fdfec9ced5677425f14 Mon Sep 17 00:00:00 2001 From: Ajay Thorve Date: Tue, 12 May 2026 14:06:17 -0700 Subject: [PATCH 14/15] Add CLI crate README Signed-off-by: Ajay Thorve --- crates/cli/README.md | 123 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 123 insertions(+) create mode 100644 crates/cli/README.md diff --git a/crates/cli/README.md b/crates/cli/README.md new file mode 100644 index 00000000..90e098b6 --- /dev/null +++ b/crates/cli/README.md @@ -0,0 +1,123 @@ + + +[![License](https://img.shields.io/github/license/NVIDIA/NeMo-Flow)](https://github.com/NVIDIA/NeMo-Flow/blob/main/LICENSE) +[![GitHub](https://img.shields.io/badge/github-repo-blue?logo=github)](https://github.com/NVIDIA/NeMo-Flow/) +[![Release](https://img.shields.io/github/v/release/NVIDIA/NeMo-Flow?color=green)](https://github.com/NVIDIA/NeMo-Flow/releases) +[![Codecov](https://codecov.io/gh/NVIDIA/NeMo-Flow/branch/main/graph/badge.svg)](https://app.codecov.io/gh/NVIDIA/NeMo-Flow) +[![PyPI](https://img.shields.io/pypi/v/nemo-flow?color=4B8BBE&logo=pypi)](https://pypi.org/project/nemo-flow/) +[![npm node](https://img.shields.io/npm/v/nemo-flow-node?label=nemo-flow-node&color=CC3534&logo=npm)](https://www.npmjs.com/package/nemo-flow-node) 
+[![npm wasm](https://img.shields.io/npm/v/nemo-flow-wasm?label=nemo-flow-wasm&color=CC3534&logo=npm)](https://www.npmjs.com/package/nemo-flow-wasm) +[![Crates.io](https://img.shields.io/crates/v/nemo-flow?label=nemo-flow&color=B7410E&logo=rust)](https://crates.io/crates/nemo-flow) +[![Crates.io](https://img.shields.io/crates/v/nemo-flow-adaptive?label=nemo-flow-adaptive&color=B7410E&logo=rust)](https://crates.io/crates/nemo-flow-adaptive) +[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/NVIDIA/NeMo-Flow) + +# nemo-flow-cli + +`nemo-flow-cli` is the coding-agent gateway CLI for NeMo Flow observability. +It installs the `nemo-flow` binary, which can configure supported coding-agent +hooks, run agents through an ephemeral gateway, and diagnose local agent and +exporter readiness. + +The CLI is a Rust package in this repository, but most users should interact +with the installed `nemo-flow` command rather than link against the crate. + +## Why Use It? + +- 🧭 **Observe existing coding agents**: Run Claude Code, Codex, Cursor, or + Hermes through a local NeMo Flow gateway without changing the agent itself. +- 🛠️ **Configure hooks interactively**: Use the setup wizard to write project or + user config and install the hook files needed by supported agents. +- 📡 **Export local sessions**: Write ATIF trajectory files, ATOF event JSONL + streams, or OpenInference spans from one shared config model. +- 🩺 **Diagnose the machine**: Check config layers, agent binaries, hook status, + observability outputs, and shell completions with `nemo-flow doctor`. + +## What You Get + +- ✅ **`nemo-flow` binary**: The executable installed by the `nemo-flow-cli` + Cargo package. +- ✅ **First-run setup**: Bare `nemo-flow` launches setup when no config exists, + then runs doctor once config is present. +- ✅ **Agent shortcuts**: `nemo-flow claude`, `nemo-flow codex`, + `nemo-flow cursor`, and `nemo-flow hermes` start observed agent runs. 
+- ✅ **Config-driven launch**: `nemo-flow run` resolves config, environment, and + CLI overrides for deterministic non-interactive use. +- ✅ **Hook forwarding server**: A local gateway accepts agent hook events and + provider-shaped OpenAI or Anthropic requests. + +## Installation + +Install the CLI from a repository checkout: + +```bash +cargo install --path crates/cli +``` + +That command installs the binary as: + +```bash +nemo-flow --version +``` + +For local development, build and test the package directly: + +```bash +cargo build -p nemo-flow-cli +cargo test -p nemo-flow-cli +``` + +## Getting Started + +Run the first-time setup wizard: + +```bash +nemo-flow +``` + +After setup, inspect local readiness: + +```bash +nemo-flow doctor +``` + +Run a supported agent through the gateway: + +```bash +nemo-flow codex +nemo-flow claude -- chat "summarize this repository" +``` + +Use `run --dry-run` to inspect resolved config without spawning the agent: + +```bash +nemo-flow run --agent codex --dry-run +``` + +## Configuration + +Project config lives at `./.nemo-flow/config.toml`; user config lives at +`~/.config/nemo-flow/config.toml` or `$XDG_CONFIG_HOME/nemo-flow/config.toml`. +The project layer overrides system config, and the user layer overrides the +project layer. 
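For orientation, a minimal project-scope config combining the blocks this README describes could look like the sketch below. The `[agents.*]` table names and `command` keys are illustrative (they follow the wizard output exercised by this repo's setup tests), not a schema reference.

```toml
# Illustrative ./.nemo-flow/config.toml — key names follow the examples in
# this README; treat them as a sketch, not an exhaustive schema.

[agents.codex]
command = "codex"

[agents.claude-code]
command = "claude"

[exporters.atif]
dir = "./atif"
```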
+ +Exporter config uses nested per-backend tables: + +```toml +[exporters.atif] +dir = "./atif" + +[exporters.atof] +dir = "./atof" +mode = "append" +filename_template = "{session_id}.jsonl" + +[exporters.openinference] +endpoint = "http://localhost:6006/v1/traces" +``` + +## Documentation + +NeMo Flow Documentation: https://nvidia.github.io/NeMo-Flow From 4536b5f99922fca744f6ddb3d7c65bc0624f2972 Mon Sep 17 00:00:00 2001 From: Ajay Thorve Date: Tue, 12 May 2026 14:08:19 -0700 Subject: [PATCH 15/15] command fix Signed-off-by: Ajay Thorve --- crates/cli/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/crates/cli/README.md b/crates/cli/README.md index 90e098b6..5ab4e588 100644 --- a/crates/cli/README.md +++ b/crates/cli/README.md @@ -87,7 +87,7 @@ Run a supported agent through the gateway: ```bash nemo-flow codex -nemo-flow claude -- chat "summarize this repository" +nemo-flow claude -- "summarize this repository" ``` Use `run --dry-run` to inspect resolved config without spawning the agent:
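The ChatGPT-backend routing these patches add reduces to two small helpers that the new gateway tests pin down: detect a JWT-shaped bearer token, and build the `/v1`-free ChatGPT backend URL. A standalone sketch with simplified signatures (the real helpers operate on a `HeaderMap`; the `Option<&str>` form here is an assumption for brevity):

```rust
// Sketch of the routing predicates exercised by the gateway tests above.
const CHATGPT_BACKEND: &str = "https://chatgpt.com/backend-api/codex";

// A ChatGPT-Plus OAuth token is a JWT, so its Authorization value starts with
// `Bearer eyJ` ("eyJ" is the base64 of the `{"` that opens every JWT header).
fn has_chatgpt_jwt(authorization: Option<&str>) -> bool {
    authorization.is_some_and(|v| v.starts_with("Bearer eyJ"))
}

// The ChatGPT backend takes paths directly under its base, so an inbound
// `/v1` prefix is stripped before joining.
fn chatgpt_upstream_url(path: &str) -> String {
    let path = path.strip_prefix("/v1").unwrap_or(path);
    format!("{CHATGPT_BACKEND}{path}")
}

fn main() {
    // JWT-shaped bearer is eligible for the ChatGPT backend...
    assert!(has_chatgpt_jwt(Some("Bearer eyJhbGciOiJIUzI1NiJ9.claims.sig")));
    // ...while an API key, or no header at all, is not.
    assert!(!has_chatgpt_jwt(Some("Bearer sk-real-api-key")));
    assert!(!has_chatgpt_jwt(None));

    assert_eq!(
        chatgpt_upstream_url("/v1/responses"),
        "https://chatgpt.com/backend-api/codex/responses"
    );
    assert_eq!(
        chatgpt_upstream_url("/models"),
        "https://chatgpt.com/backend-api/codex/models"
    );
}
```

When `OPENAI_API_KEY` is set, the gateway instead strips the JWT and injects the env key, so these helpers only decide routing for the no-key case.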