Added Gemini AI Support #1
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
Walkthrough

Adds two new AI providers (Google Gemini and Google Translate), provider selection via CLI, pricing config files, provider implementations with batching/retries/costing, provider factory and README updates, package dependency/script additions, prompt loader robustness fixes, and new unit/integration tests for Google Translate.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI
    participant ProviderFactory
    participant GeminiProvider
    participant GeminiAPI
    CLI->>ProviderFactory: createProvider(config, logger)
    ProviderFactory->>CLI: Provider instance (GeminiProvider)
    CLI->>GeminiProvider: initialize()
    CLI->>GeminiProvider: translateBatch(batch, targetLang, ...)
    GeminiProvider->>GeminiAPI: send chat/translation request(s)
    GeminiAPI-->>GeminiProvider: respond with XML/JSON
    GeminiProvider->>CLI: parsed translations + cost + debug info
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches

🧪 Generate unit tests (beta)
⚔️ Resolve merge conflicts
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (5)
src/providers/ProviderFactory.js (1)
6-8: Update the class documentation

The comment states "Currently, supports OpenAI" but the class now supports both OpenAI and Gemini providers.

```diff
  * Creates and configures AI translation providers based on configuration.
- * Currently, supports OpenAI with a unified interface for future providers.
+ * Supports OpenAI and Gemini providers with a unified interface.
```

src/providers/gemini/GeminiProvider.js (4)

238-240: Use optional chaining for cleaner code

```diff
- if (this.providerPricing && this.providerPricing.models) {
-   return Object.keys(this.providerPricing.models).sort();
+ if (this.providerPricing?.models) {
+   return Object.keys(this.providerPricing.models).sort();
```

285-285: Fix typo in comment

```diff
- // Use conservative.1.4x multiplier for Gemini.
+ // Use conservative 1.4x multiplier for Gemini.
```

439-440: Use optional chaining for cleaner code

```diff
- if (debugConfig && debugConfig.saveDebugInfo) {
+ if (debugConfig?.saveDebugInfo) {
```

755-756: Fix typo in comment

```diff
- // Conservative estimate based on.:
+ // Conservative estimate based on:
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (5)
- config/gemini-pricing.json (1 hunks)
- package.json (2 hunks)
- src/config/index.js (1 hunks)
- src/providers/ProviderFactory.js (4 hunks)
- src/providers/gemini/GeminiProvider.js (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/providers/ProviderFactory.js (1)
src/providers/gemini/GeminiProvider.js (1)
- GeminiProvider (14-762)
🪛 Biome (1.9.4)
src/providers/gemini/GeminiProvider.js
[error] 238-238: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
[error] 439-439: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
🔇 Additional comments (6)
config/gemini-pricing.json (1)
3-14: Model names don't match PR objectives

The PR objectives mention support for `gemini-2.5-pro`, `gemini-2.5-flash`, and `gemini-2.5-flash-lite`, but the configuration contains different model names (`gemini-1.5-flash-latest`, `gemini-1.5-pro-latest`, `gemini-pro`). Please verify which models should be supported.

Likely an incorrect or invalid review comment.
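The cross-check described above can be made concrete with a small script; `findModelMismatches` and the inline sample data are illustrative, not part of the project:

```javascript
// Sketch: compare model IDs advertised in provider info against those priced
// in the pricing config. Helper name and data shapes are assumptions based on
// the files discussed in this review.
function findModelMismatches(providerInfoModels, pricingConfig) {
  const priced = new Set(Object.keys(pricingConfig.models));
  const advertised = new Set(providerInfoModels);
  return {
    advertisedButUnpriced: providerInfoModels.filter((m) => !priced.has(m)),
    pricedButNotAdvertised: [...priced].filter((m) => !advertised.has(m)),
  };
}

// Sample data mirroring the mismatch flagged in this comment.
const mismatches = findModelMismatches(
  ['gemini-2.5-pro', 'gemini-2.5-flash', 'gemini-2.5-flash-lite'],
  { models: { 'gemini-1.5-flash-latest': {}, 'gemini-1.5-pro-latest': {}, 'gemini-pro': {} } }
);
console.log(mismatches.advertisedButUnpriced.length); // 3: none of the advertised models are priced
```

A check like this could run in CI so the two files cannot drift apart silently.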
package.json (1)
35-35: LGTM! The new script and dependency are properly added following the existing patterns.
Also applies to: 49-49
src/config/index.js (1)
219-221: LGTM! The CLI options are properly updated to support multiple providers with clear descriptions.
src/providers/gemini/GeminiProvider.js (3)
456-459: Verify token counting accuracy

The current implementation counts tokens by joining all message contents, but this might not match how Gemini actually counts tokens (which could include role information and message boundaries).
Consider verifying that the token counting matches Gemini's actual usage reporting to ensure accurate cost calculations.
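Pending such verification, the local estimate can at least be made explicit. Below is a sketch of a chars/4 heuristic combined with the conservative 1.4x output multiplier mentioned elsewhere in this review; `estimateTokens`/`estimateUsage` are assumed names, and the numbers are rough approximations, not Gemini's real tokenizer:

```javascript
// Hedged sketch: a purely local token estimate that can later be compared
// against the provider's reported usage to spot drift in cost accounting.
function estimateTokens(text) {
  return Math.ceil(text.length / 4); // rough rule of thumb for English text
}

function estimateUsage(messages, outputMultiplier = 1.4) {
  const promptText = messages.map((m) => m.content ?? '').join('\n');
  const promptTokens = estimateTokens(promptText);
  const completionTokens = Math.ceil(promptTokens * outputMultiplier);
  return { promptTokens, completionTokens, totalTokens: promptTokens + completionTokens };
}

const usage = estimateUsage([
  { role: 'user', content: 'Translate "Hello, world!" to French.' }, // 36 chars
]);
console.log(usage.promptTokens); // 9
```

Logging the delta between this estimate and the real usage metadata on each call would show how far the heuristic is off for a given language pair.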
532-534: Well-structured retry logic

Good implementation of selective retry logic that avoids retrying on authentication/authorization errors.

14-762: Well-implemented provider with comprehensive features

The GeminiProvider implementation is thorough and follows good practices:
- Proper error handling and retry logic
- Comprehensive debugging and dry-run support
- Good separation of concerns with private methods
- Proper cost calculation and token estimation
Great work on the implementation!
```js
displayName: 'Google Gemini',
description: 'Google Gemini models',
status: 'implemented',
models: ['gemini-2.5-pro', 'gemini-2.5-flash', 'gemini-2.5-flash-lite'],
```
Model names inconsistency across files
The models listed here (gemini-2.5-pro, gemini-2.5-flash, gemini-2.5-flash-lite) don't match the models in config/gemini-pricing.json (gemini-1.5-flash-latest, gemini-1.5-pro-latest, gemini-pro). This will cause issues when users try to use the models shown in provider info.
Please ensure consistency between:
- The pricing configuration models
- The provider info models
- The actual supported models in GeminiProvider
Consider updating either the pricing config or this provider info to match.
🤖 Prompt for AI Agents
In src/providers/ProviderFactory.js at line 87, the model names listed are
inconsistent with those in config/gemini-pricing.json and the actual supported
models in GeminiProvider. To fix this, update the models array here to exactly
match the model names used in the pricing configuration and supported by
GeminiProvider, ensuring all three sources use the same model identifiers for
consistency.
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/providers/gemini/GeminiProvider.js (1)
76-78: Temperature validation range mismatch

The temperature validation restricts values to 0-1, but the CLI configuration accepts 0-2. This will cause validation failures for valid temperature values above 1.0.
🧹 Nitpick comments (7)
src/providers/gemini/GeminiProvider.js (7)
238-239: Use optional chaining for cleaner code

Simplify the conditional check using optional chaining.

```diff
- if (this.providerPricing && this.providerPricing.models) {
+ if (this.providerPricing?.models) {
```

285-285: Fix typo in comment

There's a typographical error in the comment.

```diff
- // Use conservative.1.4x multiplier for Gemini.
+ // Use conservative 1.4x multiplier for Gemini.
```

301-302: Fix inconsistent indentation

Align the indentation for consistency.

```diff
-   'gemini-2.5-flash': { prompt: 0.000175, completion: 0.000525 },
-     'gemini-2.5-pro': { prompt: 0.0005, completion: 0.0015 },
+   'gemini-2.5-flash': { prompt: 0.000175, completion: 0.000525 },
+   'gemini-2.5-pro': { prompt: 0.0005, completion: 0.0015 },
```

439-440: Use optional chaining for debug configuration

Simplify the conditional check using optional chaining.

```diff
- if (debugConfig && debugConfig.saveDebugInfo) {
+ if (debugConfig?.saveDebugInfo) {
```

755-759: Fix inconsistent comment formatting

Remove the unnecessary periods after numbers in the comments for consistency.

```diff
- // Conservative estimate based on.:
- // - Average translation length (50-80 tokens.)
- // - XML formatting overhead (<translation id="1".>...</translation>)
- // - Plural forms (may double the output.)
- // - Some strings being longer than average.
+ // Conservative estimate based on:
+ // - Average translation length (50-80 tokens)
+ // - XML formatting overhead (<translation id="1">...</translation>)
+ // - Plural forms (may double the output)
+ // - Some strings being longer than average
```
105-105: Consider using parameter object pattern

The `translateBatch` method has 12 parameters, which makes it difficult to use and maintain. Consider creating a `TranslationRequest` object to encapsulate these parameters. Example structure:

```js
class TranslationRequest {
  constructor({
    batch,
    targetLang,
    model,
    systemPrompt,
    maxRetries = 3,
    retryDelayMs = 1000,
    timeout = 30000,
    isDryRun = false,
    retryProgressCallback = null,
    debugConfig = null,
    pluralCount = 1
  }) {
    // Initialize properties
  }
}
```

This would simplify the method signature and make it easier to add new parameters in the future.
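A runnable version of that skeleton might look like this; the class and its call site are illustrative, not the project's actual API:

```javascript
// Hypothetical sketch of the parameter-object pattern suggested in the
// comment above: defaults live in the destructured constructor, and callers
// only name the fields they care about.
class TranslationRequest {
  constructor({
    batch,
    targetLang,
    model,
    systemPrompt,
    maxRetries = 3,
    retryDelayMs = 1000,
    timeout = 30000,
    isDryRun = false,
    retryProgressCallback = null,
    debugConfig = null,
    pluralCount = 1,
  }) {
    Object.assign(this, {
      batch, targetLang, model, systemPrompt, maxRetries, retryDelayMs,
      timeout, isDryRun, retryProgressCallback, debugConfig, pluralCount,
    });
  }
}

// A caller sets four fields; the other eight fall back to defaults.
const request = new TranslationRequest({
  batch: [{ msgid: 'Hello' }],
  targetLang: 'fr',
  model: 'gemini-2.5-flash',
  systemPrompt: 'Translate the strings below.',
});
console.log(request.maxRetries); // 3 (default)
```

Adding a thirteenth option later then only touches the constructor, not every call site.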
456-457: Optimize token counting to avoid redundant calculations

Token counting is performed multiple times on the same content. Consider caching the prompt token count since the input doesn't change during retries.

```diff
+ // Calculate prompt tokens once before the retry loop
+ const fullPrompt = messages.map(m => m.content).join('\n');
+ const promptTokens = await this.getTokenCount(fullPrompt, model);
+
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      // ... existing code ...
      const usage = {
-       prompt_tokens: await this.getTokenCount(messages.map(m => m.content).join('\n'), model),
+       prompt_tokens: promptTokens,
        completion_tokens: await this.getTokenCount(responseText, model),
      };
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- config/gemini-pricing.json (1 hunks)
- src/providers/ProviderFactory.js (4 hunks)
- src/providers/gemini/GeminiProvider.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- config/gemini-pricing.json
- src/providers/ProviderFactory.js
🧰 Additional context used
🪛 Biome (1.9.4)
src/providers/gemini/GeminiProvider.js
[error] 238-238: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
[error] 439-439: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
```js
async _makeApiCallWithRetries(messages, model, batch, maxRetries, retryDelayMs, retryProgressCallback = null, debugConfig = null, pluralCount = 1, dictionaryCount = 0) {
  let lastError = null;

  // Debug: Log complete conversation at verbose level.3.
  this.logger.debug('=== FULL CONVERSATION WITH AI ===');

  messages.forEach((message, index) => {
    this.logger.debug(`Message ${index + 1} (${message.role}):`);
    this.logger.debug(message.content);
    if (index < messages.length - 1) {
      this.logger.debug('---');
    }
  });

  this.logger.debug('=== END CONVERSATION ===');

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      // Notify progress callback about retry status.
      this._notifyRetryProgress(retryProgressCallback, attempt, maxRetries);

      if (attempt > 0) {
        this.logger.info(`Retry attempt ${attempt}/${maxRetries} after ${retryDelayMs}ms delay`);

        await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
      }

      // Handle test mode failure simulation.
      this._handleTestModeFailures(attempt, maxRetries);

      const chat = this.client.startChat({
        history: messages.slice(0, -1),
        generationConfig: {
          temperature: this.config.temperature || 0.1,
          maxOutputTokens: this._calculateMaxTokens(model, batch.length),
        }
      });

      const result = await chat.sendMessage(messages[messages.length - 1].content);
      const response = await result.response;
      const responseText = response.text();

      // Debug: Log raw AI response at verbose level.3.
      this.logger.debug('=== RAW AI RESPONSE ===');
      this.logger.debug(responseText);
      this.logger.debug('=== END RAW RESPONSE ===');

      // Save debug files if enabled.
      if (debugConfig && debugConfig.saveDebugInfo) {
        await this._saveDebugFiles(messages, response, debugConfig, batch.length);
      }

      // Parse response.
      const translations = this._parseApiResponse(responseText, batch, pluralCount, dictionaryCount);

      // Debug: Log parsed translations at verbose level.3.
      this.logger.debug('=== PARSED TRANSLATIONS ===');

      translations.forEach((translation, index) => {
        this.logger.debug(`${index + 1}. "${translation.msgid}" → ${JSON.stringify(translation.msgstr)}`);
      });

      this.logger.debug('=== END PARSED TRANSLATIONS ===');

      const usage = {
        prompt_tokens: await this.getTokenCount(messages.map(m => m.content).join('\n'), model),
        completion_tokens: await this.getTokenCount(responseText, model),
      };
      usage.total_tokens = usage.prompt_tokens + usage.completion_tokens;

      const cost = this.calculateCost(usage, model);

      // Notify progress callback that we're no longer retrying.
      this._notifyRetryProgress(retryProgressCallback, attempt, maxRetries, false);

      return {
        success: true,
        translations,
        usage,
        cost,
        isDryRun: false,
        debugData: {
          messages,
          response: responseText,
        },
        dictionaryCount,
      };
    } catch (error) {
      lastError = error;

      this.logger.warn(`API call attempt ${attempt + 1} failed: ${error.message}`);

      // Don't retry on certain errors.
      if (this._shouldStopRetrying(error)) {
        break;
      }
    }
  }

  // Final progress callback update to clear retry status.
  this._notifyRetryProgress(retryProgressCallback, maxRetries, maxRetries, false);

  return {
    success: false,
    error: `Failed after ${maxRetries + 1} attempts. Last error: ${lastError.message}`,
    translations: [],
    cost: { totalCost: 0 },
    dictionaryCount,
  };
}
```
🛠️ Refactor suggestion
Refactor large method to improve maintainability
This method is 109 lines long and handles multiple responsibilities. Consider extracting the following into separate methods:
- Debug logging (lines 394-406)
- API call logic (lines 421-437)
- Response parsing and cost calculation (lines 443-477)
This will improve readability, testability, and adherence to the Single Responsibility Principle.
Would you like me to help refactor this method into smaller, more focused methods?
🧰 Tools
🪛 Biome (1.9.4)
[error] 439-439: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
🤖 Prompt for AI Agents
In src/providers/gemini/GeminiProvider.js from lines 391 to 500, the
_makeApiCallWithRetries method is too long and handles multiple tasks. Refactor
by extracting the debug logging of messages (lines 394-406) into a separate
method, the API call logic including retry and delay handling (lines 421-437)
into another method, and the response parsing plus cost calculation (lines
443-477) into a third method. Then update _makeApiCallWithRetries to call these
new methods sequentially, improving readability and maintainability.
Actionable comments posted: 3
♻️ Duplicate comments (2)
src/providers/gemini/GeminiProvider.js (2)
391-500: Large method should be refactored (duplicate issue).

This method handles multiple responsibilities and is 109 lines long. This issue was previously identified.

76-78: Temperature validation range fixed (duplicate issue).

The temperature validation range has already been updated to 0-2 as indicated by the past review comment.
🧹 Nitpick comments (4)
src/providers/gemini/GeminiProvider.js (4)
238-238: Apply optional chaining for safer property access.

The current code uses explicit null checks which can be simplified with optional chaining as suggested by the static analysis tool.

```diff
- if (this.providerPricing && this.providerPricing.models) {
-   return Object.keys(this.providerPricing.models).sort();
- }
+ if (this.providerPricing?.models) {
+   return Object.keys(this.providerPricing.models).sort();
+ }
```

285-285: Fix comment formatting.

There's a formatting issue in the comment where a period appears before the multiplier value.

```diff
- // Use conservative.1.4x multiplier for Gemini.
+ // Use conservative 1.4x multiplier for Gemini.
```

300-305: Fix inconsistent indentation in fallback pricing object.

The spacing is inconsistent in the pricing object, affecting code readability.

```diff
  return {
    models: {
      'gemini-2.5-flash': { prompt: 0.000175, completion: 0.000525 },
-       'gemini-2.5-pro': { prompt: 0.0005, completion: 0.0015 },
-
+      'gemini-2.5-pro': { prompt: 0.0005, completion: 0.0015 },
    },
    fallback: { prompt: 0.0005, completion: 0.0015 },
  };
```

439-439: Apply optional chaining for safer property access.

The static analysis tool suggests using optional chaining for safer property access.

```diff
- if (debugConfig && debugConfig.saveDebugInfo) {
+ if (debugConfig?.saveDebugInfo) {
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- src/providers/gemini/GeminiProvider.js (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/providers/gemini/GeminiProvider.js (3)
src/providers/base/Provider.js (1)
- Provider (11-194)

src/utils/dictionaryUtils.js (2)
- loadDictionary (26-90)
- findDictionaryMatches (103-141)

src/utils/xmlTranslation.js (3)
- buildXmlPrompt (22-79)
- buildDictionaryResponse (91-93)
- parseXmlResponse (108-191)
🪛 Biome (1.9.4)
src/providers/gemini/GeminiProvider.js
[error] 238-238: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
[error] 439-439: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
🔇 Additional comments (4)
src/providers/gemini/GeminiProvider.js (4)
1-14: LGTM: Clean imports and class structure.

The imports are well-organized and the class structure properly extends the base Provider class with appropriate JSDoc documentation.

39-52: LGTM: Proper initialization logic.

The initialization method correctly handles API key validation, client setup, and pricing loading with appropriate error handling.

164-202: LGTM: Robust cost calculation implementation.

The cost calculation method properly handles edge cases with comprehensive error checking and fallback values.

456-459: No async issues in token counting; awaits are correct.

The code builds the full prompt string synchronously with `messages.map(...).join('\n')`, then calls `await this.getTokenCount(...)` for both prompt and completion. Each promise is properly awaited before assigning to `usage`, so there's no risk of unawaited async operations here.

Likely an incorrect or invalid review comment.
Actionable comments posted: 0
♻️ Duplicate comments (1)
src/providers/gemini/GeminiProvider.js (1)
377-504: The method remains too long and handles multiple responsibilities.

This 127-line method was flagged in previous reviews for being overly complex but hasn't been refactored. It handles debug logging, retry logic, API calls, response parsing, and cost calculation.
Consider extracting these into separate methods:
- Debug conversation logging (lines 398-409)
- API call execution with failure simulation (lines 422-440)
- Response processing and cost calculation (lines 447-481)
This will improve testability, readability, and maintainability.
🧹 Nitpick comments (5)
src/providers/gemini/GeminiProvider.js (5)
242-248: Consider using optional chaining for cleaner code.

The conditional check can be simplified using optional chaining as suggested by static analysis.

```diff
- if (this.providerPricing && this.providerPricing.models) {
+ if (this.providerPricing?.models) {
    return Object.keys(this.providerPricing.models).sort();
  }
```

290-290: Fix typo in comment.

Remove the stray period in the comment.

```diff
- // Use conservative.1.4x multiplier for Gemini.
+ // Use conservative 1.4x multiplier for Gemini.
```

306-308: Fix inconsistent indentation.

The indentation is inconsistent in the pricing structure.

```diff
  'gemini-2.5-flash': { prompt: 0.000175, completion: 0.000525 },
-   'gemini-2.5-pro': { prompt: 0.0005, completion: 0.0015 },
+ 'gemini-2.5-pro': { prompt: 0.0005, completion: 0.0015 },
```

443-443: Use optional chaining for cleaner code.

The conditional check can be simplified as suggested by static analysis.

```diff
- if (debugConfig && debugConfig.saveDebugInfo) {
+ if (debugConfig?.saveDebugInfo) {
    await this._saveDebugFiles(messages, response, debugConfig, batch.length);
  }
```

759-764: Fix comment formatting issues.

The comments have formatting inconsistencies with periods and structure.

```diff
- // Conservative estimate based on.:
- // - Average translation length (50-80 tokens.)
- // - XML formatting overhead (<translation id="1".>...</translation>)
- // - Plural forms (may double the output.)
- // - Some strings being longer than average.
+ // Conservative estimate based on:
+ // - Average translation length (50-80 tokens)
+ // - XML formatting overhead (<translation id="1">...</translation>)
+ // - Plural forms (may double the output)
+ // - Some strings being longer than average
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- src/providers/gemini/GeminiProvider.js (1 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
src/providers/gemini/GeminiProvider.js
[error] 243-243: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
[error] 443-443: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
🔇 Additional comments (3)
src/providers/gemini/GeminiProvider.js (3)
214-232: LGTM: Client initialization check properly implemented.

The method correctly handles the case where the client is not initialized and provides appropriate fallback behavior, addressing the previous review concern.

460-460: LGTM: Null safety properly implemented.

The code correctly handles potential null/undefined message content using the nullish coalescing operator, addressing the previous review concern.

334-334: LGTM: Token estimation properly fixed.

The method now correctly uses `inputTokens` for output estimation instead of the previously incorrect `messages[1].content`, addressing the previous review concern.
Actionable comments posted: 0
🧹 Nitpick comments (1)
src/providers/gemini/GeminiProvider.js (1)
242-248: Consider using optional chaining for cleaner code.

The logic is correct, but you can simplify using optional chaining as suggested by static analysis.

```diff
- if (this.providerPricing && this.providerPricing.models) {
-   return Object.keys(this.providerPricing.models).sort();
- }
+ if (this.providerPricing?.models) {
+   return Object.keys(this.providerPricing.models).sort();
+ }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- src/providers/gemini/GeminiProvider.js (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/providers/gemini/GeminiProvider.js (3)
src/providers/base/Provider.js (1)
- Provider (11-194)

src/utils/dictionaryUtils.js (2)
- loadDictionary (26-90)
- findDictionaryMatches (103-141)

src/utils/xmlTranslation.js (3)
- buildXmlPrompt (22-79)
- buildDictionaryResponse (91-93)
- parseXmlResponse (108-191)
🪛 Biome (1.9.4)
src/providers/gemini/GeminiProvider.js
[error] 243-243: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
[error] 449-449: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
🔇 Additional comments (13)
src/providers/gemini/GeminiProvider.js (13)
1-27: LGTM - Clean class structure and imports.

The imports are well-organized and the class properly extends the base Provider with appropriate initialization.

39-52: LGTM - Proper initialization with dry-run support.

The initialization correctly handles both dry-run and live modes, with appropriate API key validation.

63-84: LGTM - Comprehensive configuration validation.

The validation properly checks API key requirements, model support, and temperature ranges. The temperature validation correctly allows the 0-2 range as noted in previous reviews.

105-152: LGTM - Well-structured translation method.

The method properly handles dictionary integration, message construction, and routing between dry-run and live modes. The dictionary instruction formatting ensures consistent translations.

164-202: LGTM - Robust cost calculation with error handling.

The method properly validates usage data and handles edge cases, providing detailed cost breakdowns with appropriate error messages.

214-232: LGTM - Improved token counting with safety checks.

The method now includes proper client initialization validation as noted in previous reviews, with appropriate fallback logic for when the client is unavailable.

259-312: LGTM - Solid pricing and provider methods.

The pricing methods provide appropriate fallbacks and `estimateOutputTokens` uses a reasonable multiplier for Gemini models.

328-375: LGTM - Comprehensive dry-run implementation.

The dry-run handler properly estimates costs and returns structured results compatible with the live API response format, enabling consistent downstream processing.

395-510: LGTM - Comprehensive API call implementation with retry logic.

The method includes robust retry logic, debug logging, and error handling. While previous reviews suggested refactoring for length, the current implementation is comprehensive and includes the null safety fixes for message content.

522-598: LGTM - Well-designed retry helper methods.

The helper methods are focused and handle retry notifications, error evaluation, and test mode simulation effectively. The test failure simulation includes realistic error scenarios.

636-648: LGTM - Clean response parsing with fallback.

The method properly delegates to the XML parsing utility and provides appropriate fallback behavior for parsing failures.

665-771: LGTM - Comprehensive debug and token calculation utilities.

The debug file saving includes detailed metadata and the token calculation methods provide intelligent auto-calculation with safety bounds. The implementation is thorough and well-documented.

1-772: Excellent comprehensive implementation of Gemini AI provider.

This implementation demonstrates:
- Robust error handling and retry logic
- Comprehensive cost calculation and token management
- Dictionary integration for consistent translations
- Debug capabilities and test mode simulation
- Proper integration with the existing provider architecture
The previous critical issues noted in past reviews have been successfully addressed, including temperature validation, null safety for message content, and client initialization checks.
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
README.md (1)
209-209: ⚠️ Potential issue | 🟡 Minor

Help text is now inconsistent with src/config/index.js. The CLI help on line 220 of src/config/index.js was updated to "Provider API key (overrides API_KEY env var)", but this README still says "OpenAI API key (overrides `API_KEY` env var)". Update for consistency now that `--api-key` covers Gemini too.

```diff
-| `--api-key <key>` | `-k` | OpenAI API key (overrides `API_KEY` env var) | - |
+| `--api-key <key>` | `-k` | Provider API key (overrides `API_KEY` env var) | - |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 209, Update the README table entry for `--api-key` so its help text matches the CLI help in src/config/index.js: change "OpenAI API key (overrides `API_KEY` env var)" to "Provider API key (overrides `API_KEY` env var)" for the `--api-key <key>` / `-k` row; ensure the exact phrase `Provider API key (overrides API_KEY env var)` is used to keep wording consistent with the CLI help.
♻️ Duplicate comments (1)
src/providers/ProviderFactory.js (1)
85-96: ⚠️ Potential issue | 🟡 Minor

`getProviderInfo()` lists only 2 of the 5 Gemini models actually supported.

`config/gemini-pricing.json` defines pricing for `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-3.1-pro-preview`, `gemini-flash-latest`, and `gemini-3.1-flash-lite-preview`, and `GeminiProvider.getSupportedModels()` returns all five (sorted). The provider-info list shown to users via this metadata only advertises two. If `getProviderInfo()` is the canonical place users learn which models exist, derive it from the same source the provider uses.

```diff
 {
   name: 'gemini',
   displayName: 'Google Gemini',
   description: 'Google Gemini models',
   status: 'implemented',
-  models: ['gemini-2.5-pro', 'gemini-2.5-flash'],
+  models: [
+    'gemini-2.5-pro',
+    'gemini-2.5-flash',
+    'gemini-3.1-pro-preview',
+    'gemini-flash-latest',
+    'gemini-3.1-flash-lite-preview',
+  ],
   configExample: {
     provider: 'gemini',
     apiKey: 'your-gemini-api-key',
     model: 'gemini-2.5-flash',
   },
 },
```
Verify each finding against the current code and only fix it if needed. In `@src/providers/ProviderFactory.js` around lines 85 - 96, The provider info currently hardcodes only two Gemini models in the models array inside getProviderInfo(), but GeminiProvider.getSupportedModels() (and the gemini pricing data) include five; modify getProviderInfo() so the Gemini entry's models list is derived from the canonical source instead of a hardcoded array—call GeminiProvider.getSupportedModels() (or read the same pricing data source), sort/format the returned model names as needed, and use that result for the models field (keep displayName/description/status as-is).
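The single-source approach recommended in that comment can be sketched as follows; `buildProviderInfo` is a hypothetical helper and `pricing` is sample data shaped like `config/gemini-pricing.json`:

```javascript
// Sketch: derive the advertised model list from the pricing data instead of
// hardcoding it, so adding a model only requires touching one file.
function buildProviderInfo(pricing) {
  return {
    name: 'gemini',
    displayName: 'Google Gemini',
    status: 'implemented',
    models: Object.keys(pricing.models).sort(), // same list the provider supports
  };
}

const pricing = {
  models: {
    'gemini-2.5-pro': { prompt: 0.0005, completion: 0.0015 },
    'gemini-2.5-flash': { prompt: 0.000175, completion: 0.000525 },
  },
  fallback: { prompt: 0.0005, completion: 0.0015 },
};

console.log(buildProviderInfo(pricing).models); // [ 'gemini-2.5-flash', 'gemini-2.5-pro' ]
```

In the real factory the same pricing object the provider loads could be passed in, which removes the possibility of the two lists drifting apart.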
🧹 Nitpick comments (2)
src/providers/gemini/GeminiProvider.js (1)
242-265: Hardcoded fallback model list and pricing can drift from `config/gemini-pricing.json`.

`getSupportedModels()` (line 247) and `getModelPricing()` (line 261) duplicate model identifiers / prices that already live in `config/gemini-pricing.json` and `_getFallbackPricing()`. As a result, when the pricing JSON loads correctly the fallbacks are dead, and when it fails the two paths can diverge silently. Have `getModelPricing` delegate to `_getFallbackPricing()` (already the right shape):

```diff
 getModelPricing(model) {
   if (!this.providerPricing) {
-    return { prompt: 0.0005, completion: 0.0015 };
+    const fb = this._getFallbackPricing();
+    return fb.models[model] || fb.fallback;
   }
   return this.providerPricing.models[model] || this.providerPricing.fallback;
 }
```

And let `getSupportedModels()` derive from `_getFallbackPricing()` when `providerPricing` is null, so adding a model only requires touching one place.
getSupportedModels()derive from_getFallbackPricing()whenproviderPricingis null, so adding a model only requires touching one place.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/providers/gemini/GeminiProvider.js` around lines 242 - 265, Replace hardcoded fallbacks by delegating to the existing _getFallbackPricing(): update getSupportedModels() so when this.providerPricing is falsy it returns Object.keys(this._getFallbackPricing().models).sort() instead of the hardcoded array, and update getModelPricing(model) to return this.providerPricing ? (this.providerPricing.models[model] || this.providerPricing.fallback) : this._getFallbackPricing().models[model] || this._getFallbackPricing().fallback so all fallback model IDs and prices come from _getFallbackPricing() rather than duplicated literals.src/providers/google-translate/GoogleTranslateProvider.js (1)
314-321: Unreachable fallback `return` at end of `translateItem`.

The `while (attempts <= maxRetries)` loop only exits via `return` inside the `try` block or the `if (attempts > maxRetries)` branch in `catch`. Reaching the bottom is impossible. Drop the dead code or replace it with an explicit `throw new Error('unreachable')` so a future refactor that breaks the invariant fails loudly rather than silently returning an empty `msgstr`.
Verify each finding against the current code and only fix it if needed. In `@src/providers/google-translate/GoogleTranslateProvider.js` around lines 314 - 321, The fallback return at the end of translateItem is dead code because the while (attempts <= maxRetries) loop always returns or throws; remove that silent fallback and replace it with an explicit throw (e.g. throw new Error('unreachable: translateItem exceeded control flow invariants')) so future refactors fail loudly; update the function translateItem (the loop using attempts and maxRetries) to remove the final return and instead throw an Error with a descriptive message.
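The explicit-exit shape that nitpick suggests can be sketched generically (this helper is illustrative, not code from the PR):

```javascript
// Generic retry wrapper: every exit path is explicit, so no silent
// fallback return is needed after the loop.
async function withRetries(fn, maxRetries) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxRetries) {
        throw error; // fail loudly instead of returning a placeholder result
      }
    }
  }
}
```

Because the loop can only `return` a result or `throw`, a trailing fallback `return` would be flagged as unreachable by most linters.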
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@config/gemini-pricing.json`:
- Around line 2-27: The pricing constants in config/gemini-pricing.json are
materially lower than Google's published April 2026 Gemini rates, which will
make GeminiProvider.calculateCost() and dry-run/--max-cost checks inaccurate;
update each model entry (e.g., "gemini-2.5-pro", "gemini-2.5-flash",
"gemini-3.1-pro-preview", "gemini-flash-latest",
"gemini-3.1-flash-lite-preview") and the "fallback" values to the official
per‑1K token prompt and completion prices from Google's Gemini pricing page, and
add/annotate the long‑context (>200K) tier (2× multiplier) for 2.5 Pro and 3.1
Pro where appropriate so cost calculations and README tables match production
billing and enforcement performed by GeminiProvider.calculateCost().
In `@README.md`:
- Around line 300-306: The README's static pricing table is out of sync with
config/gemini-pricing.json; update README.md so the per‑1K prompt and completion
rates exactly match the values in config/gemini-pricing.json (or replace the
hardcoded table with a build step or script that generates the table from that
JSON at docs build time); locate and change the table rows for the Gemini models
(the current entries for gemini-3.1-pro-preview, gemini-2.5-pro,
gemini-2.5-flash, gemini-flash-latest, gemini-3.1-flash-lite-preview) to reflect
the JSON or implement the generation approach to avoid future drift.
- Around line 350-353: The README mentions GOOGLE_API_KEY and
GOOGLE_CLOUD_API_KEY but the code only reads API_KEY; update
parseEnvironmentConfig() / createConfiguration() to accept provider-specific env
fallbacks by reading GOOGLE_API_KEY and GOOGLE_CLOUD_API_KEY (via envSchema or
explicit process.env reads) and merging them into config.apiKey when API_KEY is
absent, or alternatively update envSchema to include those keys mapped into the
same config.apiKey value so validation succeeds; look for the functions
parseEnvironmentConfig(), createConfiguration(), and the config.apiKey usage to
implement the fallback logic.
In `@src/config/index.js`:
- Around line 305-315: The provider comparisons in this block are case-sensitive
while ProviderFactory.createProvider lowercases provider names, causing
mismatches; normalize options.provider (e.g., use options.provider &&
options.provider.toLowerCase()) before comparing to 'google-translate' so both
the API key branch and the provider-specific branch behave consistently with
ProviderFactory.createProvider, and remove the redundant source-language
presence check for Google Translate (options.sourceLanguage /
DEFAULTS.SOURCE_LANGUAGE) since parseCliArguments already applies a default and
ProviderFactory.createAndValidateProvider enforces required fields—either drop
the kludgy validation here or change parseCliArguments/DEFAULTS handling so
absence is detectable if you prefer keeping this check.
In `@src/providers/gemini/GeminiProvider.js`:
- Around line 544-546: Update the _shouldStopRetrying method to treat 400 and
404 as non-retriable in addition to 401 and 403: inside
GeminiProvider._shouldStopRetrying, check error.status and return true for
statuses 400, 401, 403, and 404 (while leaving 429 and 5xx to return false so
they continue to be retried).
- Around line 700-704: The debug object is computing systemPromptLength and
userMessageLength using fixed indices which are wrong when seeded turns or
inserted dictionary matches change message order; update the logic in
GeminiProvider (around where the debug object sets systemPromptLength and
userMessageLength) to locate the system message by role === 'system' (take its
content.length) and the user message by the last message with role === 'user'
(take its content.length), add safe fallbacks (0) if those messages are missing,
and replace the direct uses of messages[0] and messages[1] so lengths reflect
the actual roles rather than fixed indices.
- Around line 289-292: Fix the typo in the inline comment/doc for
estimateOutputTokens ("conservative.1.4x" → "conservative 1.4x") and remove the
unused targetLang parameter from the estimateOutputTokens signature and any
related docs; update all callers that pass a second argument to call
estimateOutputTokens(inputTokens) instead. Alternatively, if you prefer
language-sensitive sizing, implement a small language multiplier map inside
estimateOutputTokens (e.g., {zh:1.1, en:1.4, de:1.5, ru:1.5}) and use targetLang
to pick the multiplier—ensure the comment reflects the chosen behavior and
remove the unused-parameter warning accordingly.
In `@src/providers/google-translate/GoogleTranslateProvider.js`:
- Around line 101-108: calculateCost currently ignores the model parameter and
returns a minimal object that breaks parity with other providers; update
GoogleTranslateProvider.calculateCost to use the passed-in model value for the
model field and return the full shape used by GeminiProvider/OpenAIProvider:
include promptTokens, completionTokens, totalTokens (sum of the two),
promptCost, completionCost, totalCost (sum of costs), and pricingUsed (e.g.,
"google-translate" or a pricing object/identifier), defaulting token/count/cost
values to 0 if you cannot compute them; this keeps callers that destructure
those fields from receiving undefined.
- Around line 460-466: The _shouldStopRetrying helper is dead code and always
returns false; update it to classify errors (return true for non-retriable
client errors like HTTP 4xx auth/permission errors and false for retriable
transient errors like timeouts/5xx), then call it from the catch block inside
_makeApiCallWithRetries so the retry loop breaks and rethrows when
_shouldStopRetrying(error) is true; specifically modify _shouldStopRetrying to
inspect error.status / error.code / error.message for 4xx vs 5xx/timeout and add
a call to _shouldStopRetrying(error) in the catch to stop retrying and throw the
original error when it signals non-retriable.
- Around line 277-285: The current plural handling replaces the singular
translation by filling all msgstr slots with the pluralTranslation; change it so
msgstr[0] remains the singular (translation) and only msgstr[1..pluralCount-1]
are populated with the pluralTranslation obtained via
this._translateText(item.msgid_plural, sourceLang, targetLang, timeoutMs).
Locate the block using variables msgstr, item.msgid_plural, pluralCount and the
call to _translateText and update the logic to initialize msgstr = [translation]
and, when pluralCount > 1, set only the remaining slots (indices
1..pluralCount-1) to the pluralTranslation rather than overwriting msgstr[0].
- Around line 259-265: The timeout unit detection in _makeApiCallWithRetries is
fragile; remove the magnitude heuristic and treat the timeout parameter as
seconds (consistent with project config), update the JSDoc for
_makeApiCallWithRetries to state "timeout is in seconds", and convert explicitly
to milliseconds inside the function with const timeoutMs = (typeof timeout ===
'number') ? timeout * 1000 : undefined; (or validate and throw if not a number);
also update any tests or call sites that pass raw milliseconds (e.g., change
30000 to 30) or adjust the config-reading code to perform the conversion once
when loading TIMEOUT so this function no longer guesses units.
- Around line 267-321: translateItem ignores the retryDelayMs parameter and uses
a hardcoded 500ms sleep only for timeout/429 errors, causing immediate retries
for other errors; update translateItem to use the configured retryDelayMs
(passed into the surrounding scope) and apply exponential backoff for 429s by
calling this._sleep(retryDelayMs * 2 ** (attempts-1)) (or
this._sleep(retryDelayMs) for other transient errors) instead of the fixed 500,
ensure retries occur after every failed attempt before continuing the loop, and
keep existing behavior around maxRetries, attempts, _sleep, and plural handling
(msgstr, pluralCount) intact.
- Around line 11-37: The GoogleTranslateProvider currently uses the undocumented
baseUrl 'https://translate.googleapis.com/translate_a/single' which violates
Google ToS and is unreliable; update the provider to either 1) switch to the
official API by replacing this.baseUrl with
'https://translation.googleapis.com/language/translate/v2', require and validate
an API key in constructor(config, logger), and adjust authentication/parameters
everywhere that builds requests; or 2) if you must keep the unofficial endpoint,
rename the class (e.g., GoogleTranslateBestEffortProvider) and in initialize()
call this.logger.warn(...) to document ToS risk, lack of SLA, and free
best-effort nature, and also surface the mismatch with README pricing;
additionally, fix reliability gaps referenced by the reviewer by ensuring
retryDelayMs parameter (used around the 500ms hardcode) is honored, make
_shouldStopRetrying() used by the retry loop, and correct the timeout heuristic
and plural-array logic where mentioned.
In `@tests/integration/google-translate.test.js`:
- Around line 137-140: The test currently asserts a hard-coded URL string which
fails because GoogleTranslateProvider._buildUrl uses URLSearchParams and encodes
spaces as '+'; update the test in google-translate.test.js to avoid exact
serialized encoding by either (a) changing the expected query portion to use
'q=Hello+World' to match URLSearchParams, or preferably (b) parse the returned
URL with new URL(url) and assert URL.searchParams.get('q') === 'Hello World'
(and separately assert sl= 'en' and tl= 'fr' if desired) so the test isn't tied
to the serializer's space-encoding choice.
In `@tests/unit/googleTranslateProvider.test.js`:
- Around line 129-134: The test expecting 'Helloworld' uses the wrong mock
shape; update the test for the provider._parseResponse function so the mocked
response matches Google Translate's segment format (each segment is an array
whose first element is the translated text). For example, replace the mockData
[[['Hello','world']]] with an array of segments like [[['Hello','orig'],
['world','orig2']]] and assert result === 'Helloworld', or alternatively keep
the original mock and change the expectation to 'Hello' to match current
implementation; adjust the it(...) description accordingly.
---
Outside diff comments:
In `@README.md`:
- Line 209: Update the README table entry for `--api-key` so its help text
matches the CLI help in src/config/index.js: change "OpenAI API key (overrides
`API_KEY` env var)" to "Provider API key (overrides `API_KEY` env var)" for the
`--api-key <key>` / `-k` row; ensure the exact phrase `Provider API key
(overrides API_KEY env var)` is used to keep wording consistent with the CLI
help.
---
Duplicate comments:
In `@src/providers/ProviderFactory.js`:
- Around line 85-96: The provider info currently hardcodes only two Gemini
models in the models array inside getProviderInfo(), but
GeminiProvider.getSupportedModels() (and the gemini pricing data) include five;
modify getProviderInfo() so the Gemini entry's models list is derived from the
canonical source instead of a hardcoded array—call
GeminiProvider.getSupportedModels() (or read the same pricing data source),
sort/format the returned model names as needed, and use that result for the
models field (keep displayName/description/status as-is).
---
Nitpick comments:
In `@src/providers/gemini/GeminiProvider.js`:
- Around line 242-265: Replace hardcoded fallbacks by delegating to the existing
_getFallbackPricing(): update getSupportedModels() so when this.providerPricing
is falsy it returns Object.keys(this._getFallbackPricing().models).sort()
instead of the hardcoded array, and update getModelPricing(model) to return
this.providerPricing ? (this.providerPricing.models[model] ||
this.providerPricing.fallback) : this._getFallbackPricing().models[model] ||
this._getFallbackPricing().fallback so all fallback model IDs and prices come
from _getFallbackPricing() rather than duplicated literals.
In `@src/providers/google-translate/GoogleTranslateProvider.js`:
- Around line 314-321: The fallback return at the end of translateItem is dead
code because the while (attempts <= maxRetries) loop always returns or throws;
remove that silent fallback and replace it with an explicit throw (e.g. throw
new Error('unreachable: translateItem exceeded control flow invariants')) so
future refactors fail loudly; update the function translateItem (the loop using
attempts and maxRetries) to remove the final return and instead throw an Error
with a descriptive message.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bd3b401a-526e-4265-a044-04f3f75aac50
⛔ Files ignored due to path filters (1)
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (10)
- README.md
- config/gemini-pricing.json
- config/google-translate-pricing.json
- src/config/index.js
- src/providers/ProviderFactory.js
- src/providers/gemini/GeminiProvider.js
- src/providers/google-translate/GoogleTranslateProvider.js
- src/utils/promptLoader.js
- tests/integration/google-translate.test.js
- tests/unit/googleTranslateProvider.test.js
✅ Files skipped from review due to trivial changes (2)
- config/google-translate-pricing.json
- src/utils/promptLoader.js
| "models": { | ||
| "gemini-2.5-pro": { | ||
| "prompt": 0.0005, | ||
| "completion": 0.0015 | ||
| }, | ||
| "gemini-2.5-flash": { | ||
| "prompt": 0.000175, | ||
| "completion": 0.000525 | ||
| }, | ||
| "gemini-3.1-pro-preview": { | ||
| "prompt": 0.00125, | ||
| "completion": 0.005 | ||
| }, | ||
| "gemini-flash-latest": { | ||
| "prompt": 0.000175, | ||
| "completion": 0.000525 | ||
| }, | ||
| "gemini-3.1-flash-lite-preview": { | ||
| "prompt": 0.0000375, | ||
| "completion": 0.00015 | ||
| } | ||
| }, | ||
| "fallback": { | ||
| "prompt": 0.0005, | ||
| "completion": 0.0015 | ||
| } |
🧩 Analysis chain
🌐 Web query:
Gemini API pricing per 1M tokens 2026 official: gemini-2.5-pro, gemini-2.5-flash, gemini-3.1-pro-preview, gemini-flash-latest, gemini-3.1-flash-lite-preview
💡 Result:
Gemini API Paid Tier pricing per 1M tokens (USD) as of April 2026 from official Google AI for Developers documentation (standard tier, text/image/video input unless noted; output includes thinking tokens; audio input higher where specified; Pro models tiered by prompt length <=200K / >200K tokens):

- gemini-2.5-pro: Input $1.25 / $2.50, Output $10.00 / $15.00
- gemini-2.5-flash: Input $0.30 (text/image/video), $1.00 (audio); Output $2.50
- gemini-3.1-pro-preview: Input $2.00 / $4.00, Output $12.00 / $18.00
- gemini-flash-latest: Likely maps to latest stable Flash such as gemini-2.5-flash or gemini-3-flash-preview at Input $0.30-$0.50 / Output $2.50-$3.00 (exact alias pricing follows underlying model; no explicit separate rate found)
- gemini-3.1-flash-lite-preview: Input $0.25 (text/image/video), $0.50 (audio); Output $1.50

Free tier available with limits (e.g., requests per day). Context caching and grounding extra. Batch/Priority tiers offer discounts/multipliers. Prices from ai.google.dev/gemini-api/docs/pricing (results 1, 17+), Vertex AI docs (4, 9, 10), and consistent secondary confirmations (2, 20, 23, 26, 29). Vertex AI matches Developer API standard rates. "gemini-flash-latest" points to the latest Flash release (e.g., 2.5 Flash per models docs), using 2.5 Flash rates conservatively as the most cited stable mapping.
Citations:
- 1: https://ai.google.dev/gemini-api/docs/pricing
- 2: https://docs.cloud.google.com/vertex-ai/generative-ai/pricing
- 3: https://cloud.google.com/gemini-enterprise-agent-platform/generative-ai/pricing
- 4: https://cloud.google.com/vertex-ai/generative-ai/pricing
- 5: https://devtk.ai/en/models/gemini-3-1-pro/
- 6: https://benchlm.ai/blog/posts/gemini-api-pricing
- 7: https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-flash-lite
- 8: https://developers.googleblog.com/gemini-2-5-thinking-model-updates/
- 9: https://devtk.ai/en/models/gemini-3-1-flash-lite/
- 10: https://ai.prygn.com/model/google/gemini-3.1-flash-lite-preview
- 11: https://ai.google.dev/gemini-api/docs/models
Pricing values are significantly below Google's published rates — cost estimates and --max-cost enforcement will be inaccurate.
Per Google's official Gemini API pricing (April 2026), the per‑1K-token costs in this file are 2–10× lower than reality:
| Model | Official (per 1K) | This file (per 1K) |
|---|---|---|
| gemini-2.5-pro | $0.00125 / $0.01 | $0.0005 / $0.0015 |
| gemini-2.5-flash | $0.0003 / $0.0025 | $0.000175 / $0.000525 |
| gemini-3.1-pro-preview | $0.002 / $0.012 | $0.00125 / $0.005 |
| gemini-3.1-flash-lite-preview | $0.00025 / $0.0015 | $0.0000375 / $0.00015 |
Since GeminiProvider.calculateCost() and dry‑run estimation read directly from this config (and the README pricing tables mirror it), a user who sets --max-cost 5.00 could be billed multiples of that on real runs. Note also that 2.5 Pro and 3.1 Pro have a long‑context (>200K tokens) tier at 2× — worth documenting if not modeling.
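To make the gap concrete, here is a minimal sketch of the per-1K cost math (the `calcCost` helper is hypothetical; it mirrors the `{prompt, completion}` shape of this config, not the provider's actual method):

```javascript
// Hypothetical helper mirroring the per-1K pricing shape used in
// config/gemini-pricing.json: cost = tokens / 1000 * rate.
function calcCost(promptTokens, completionTokens, pricing) {
  const promptCost = (promptTokens / 1000) * pricing.prompt;
  const completionCost = (completionTokens / 1000) * pricing.completion;
  return promptCost + completionCost;
}

// A 2M-prompt / 500K-completion run on gemini-2.5-pro:
const configured = calcCost(2_000_000, 500_000, { prompt: 0.0005, completion: 0.0015 }); // rates from this file
const official = calcCost(2_000_000, 500_000, { prompt: 0.00125, completion: 0.01 });    // published rates

console.log(configured.toFixed(2)); // 1.75
console.log(official.toFixed(2));   // 7.50
```

With these numbers, a `--max-cost 5.00` guard passes on the estimate (`$1.75`) while the real bill lands at `$7.50`.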
📐 Corrected values (from official ai.google.dev/gemini-api/docs/pricing)

```diff
   "models": {
     "gemini-2.5-pro": {
-      "prompt": 0.0005,
-      "completion": 0.0015
+      "prompt": 0.00125,
+      "completion": 0.01
     },
     "gemini-2.5-flash": {
-      "prompt": 0.000175,
-      "completion": 0.000525
+      "prompt": 0.0003,
+      "completion": 0.0025
     },
     "gemini-3.1-pro-preview": {
-      "prompt": 0.00125,
-      "completion": 0.005
+      "prompt": 0.002,
+      "completion": 0.012
     },
     "gemini-flash-latest": {
-      "prompt": 0.000175,
-      "completion": 0.000525
+      "prompt": 0.0003,
+      "completion": 0.0025
     },
     "gemini-3.1-flash-lite-preview": {
-      "prompt": 0.0000375,
-      "completion": 0.00015
+      "prompt": 0.00025,
+      "completion": 0.0015
     }
   },
   "fallback": {
-    "prompt": 0.0005,
-    "completion": 0.0015
+    "prompt": 0.00125,
+    "completion": 0.01
   }
 }
```

🤖 Prompt for AI Agents
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@config/gemini-pricing.json` around lines 2 - 27, The pricing constants in
config/gemini-pricing.json are materially lower than Google's published April
2026 Gemini rates, which will make GeminiProvider.calculateCost() and
dry-run/--max-cost checks inaccurate; update each model entry (e.g.,
"gemini-2.5-pro", "gemini-2.5-flash", "gemini-3.1-pro-preview",
"gemini-flash-latest", "gemini-3.1-flash-lite-preview") and the "fallback"
values to the official per‑1K token prompt and completion prices from Google's
Gemini pricing page, and add/annotate the long‑context (>200K) tier (2×
multiplier) for 2.5 Pro and 3.1 Pro where appropriate so cost calculations and
README tables match production billing and enforcement performed by
GeminiProvider.calculateCost().
| Model | Prompt Cost | Completion Cost | Best For |
|-------|-------------|-----------------|----------|
| `gemini-3.1-pro-preview` | $0.00125/1K tokens | $0.005/1K tokens | Complex translations, highest quality |
| `gemini-2.5-pro` | $0.0005/1K tokens | $0.0015/1K tokens | High-quality general use |
| `gemini-2.5-flash` | $0.000175/1K tokens | $0.000525/1K tokens | Fast, cost-effective translations |
| `gemini-flash-latest` | $0.000175/1K tokens | $0.000525/1K tokens | Latest optimized flash model |
| `gemini-3.1-flash-lite-preview` | $0.0000375/1K tokens | $0.00015/1K tokens | Ultra-budget translations |
Pricing in this table will be wrong as soon as you fix config/gemini-pricing.json.
This README table mirrors the per‑1K rates from config/gemini-pricing.json, which I flagged as significantly below Google’s real pricing. Whatever values you settle on in the config file should be reflected here verbatim to avoid the README diverging from runtime cost calculations. Consider generating this table from the JSON to prevent future drift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 300 - 306, The README's static pricing table is out
of sync with config/gemini-pricing.json; update README.md so the per‑1K prompt
and completion rates exactly match the values in config/gemini-pricing.json (or
replace the hardcoded table with a build step or script that generates the table
from that JSON at docs build time); locate and change the table rows for the
Gemini models (the current entries for gemini-3.1-pro-preview, gemini-2.5-pro,
gemini-2.5-flash, gemini-flash-latest, gemini-3.1-flash-lite-preview) to reflect
the JSON or implement the generation approach to avoid future drift.
**Environment Variables:**
- `GOOGLE_API_KEY` - For Gemini provider
- `GOOGLE_CLOUD_API_KEY` - For Google Translate provider
- `API_KEY` - Falls back to OpenAI
Documented GOOGLE_API_KEY / GOOGLE_CLOUD_API_KEY env vars are not actually wired up.
src/config/index.js only reads API_KEY (via envSchema) and merges that into config.apiKey. There is no provider‑specific env var fallback, so users who follow this section and set GOOGLE_API_KEY=... will hit “API key required” at validation time. Either add the env‑var fallbacks in parseEnvironmentConfig() / createConfiguration(), or correct the README to say only API_KEY works for all providers.
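One possible shape for that fallback (a sketch only; `resolveApiKey` is a hypothetical helper, and the real change belongs in `parseEnvironmentConfig()` / `createConfiguration()`):

```javascript
// Hypothetical helper: prefer a provider-specific key, then the generic API_KEY.
function resolveApiKey(provider, env = process.env) {
  const providerKeys = {
    gemini: env.GOOGLE_API_KEY,
    'google-translate': env.GOOGLE_CLOUD_API_KEY,
  };
  const normalized = (provider || '').toLowerCase();
  return providerKeys[normalized] || env.API_KEY || null;
}

// Example: GOOGLE_API_KEY wins for gemini; API_KEY remains the generic fallback.
const fakeEnv = { API_KEY: 'generic', GOOGLE_API_KEY: 'gem-key' };
console.log(resolveApiKey('gemini', fakeEnv)); // 'gem-key'
console.log(resolveApiKey('openai', fakeEnv)); // 'generic'
```

Merging the result into `config.apiKey` before validation would make the documented variables actually work.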
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 350 - 353, The README mentions GOOGLE_API_KEY and
GOOGLE_CLOUD_API_KEY but the code only reads API_KEY; update
parseEnvironmentConfig() / createConfiguration() to accept provider-specific env
fallbacks by reading GOOGLE_API_KEY and GOOGLE_CLOUD_API_KEY (via envSchema or
explicit process.env reads) and merging them into config.apiKey when API_KEY is
absent, or alternatively update envSchema to include those keys mapped into the
same config.apiKey value so validation succeeds; look for the functions
parseEnvironmentConfig(), createConfiguration(), and the config.apiKey usage to
implement the fallback logic.
```diff
 if (!options.dryRun && !options.apiKey && !process.env.API_KEY) {
-  errors.push('🔑 API key required (set API_KEY env var, use --api-key, or try --dry-run)');
+  // Google Translate doesn't require an API key
+  if (options.provider !== 'google-translate') {
+    errors.push('🔑 API key required (set API_KEY env var, use --api-key, or try --dry-run)');
+  }
 }

+// Google Translate requires source language
+if (options.provider === 'google-translate' && !options.sourceLanguage && !process.env.SOURCE_LANGUAGE) {
+  errors.push('🌐 Source language required for Google Translate (use -s or --source-language)');
+}
```
Provider comparisons here are case‑sensitive while ProviderFactory lower‑cases — and the sourceLanguage check is effectively dead code.
Two issues:
1. `options.provider !== 'google-translate'` and `options.provider === 'google-translate'` will both miss `--provider Google-Translate` / `GOOGLE-TRANSLATE`, while `ProviderFactory.createProvider` happily accepts those (it does `providerName.toLowerCase()`). Result: API key validation will incorrectly fire (or be skipped) depending on case. Normalize once.
2. `options.sourceLanguage` is wired in `parseCliArguments` (Line 209) with default `DEFAULTS.SOURCE_LANGUAGE` (which itself defaults to `'en'`). So `!options.sourceLanguage` is essentially always falsy unless someone explicitly passes `--source-language ''`. The Google Translate sourceLanguage validation never fires in practice; the provider's own `validateConfig` already enforces this via `ProviderFactory.createAndValidateProvider`. Either drop the duplicate check here, or also strip the CLI default so the absence is detectable.
- if (!options.dryRun && !options.apiKey && !process.env.API_KEY) {
- // Google Translate doesn't require an API key
- if (options.provider !== 'google-translate') {
- errors.push('🔑 API key required (set API_KEY env var, use --api-key, or try --dry-run)');
- }
- }
-
- // Google Translate requires source language
- if (options.provider === 'google-translate' && !options.sourceLanguage && !process.env.SOURCE_LANGUAGE) {
- errors.push('🌐 Source language required for Google Translate (use -s or --source-language)');
- }
+ const provider = (options.provider || '').toLowerCase();
+ if (!options.dryRun && !options.apiKey && !process.env.API_KEY) {
+ // Google Translate doesn't require an API key.
+ if (provider !== 'google-translate') {
+ errors.push('🔑 API key required (set API_KEY env var, use --api-key, or try --dry-run)');
+ }
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/config/index.js` around lines 305 - 315, The provider comparisons in this
block are case-sensitive while ProviderFactory.createProvider lowercases
provider names, causing mismatches; normalize options.provider (e.g., use
options.provider && options.provider.toLowerCase()) before comparing to
'google-translate' so both the API key branch and the provider-specific branch
behave consistently with ProviderFactory.createProvider, and remove the
redundant source-language presence check for Google Translate
(options.sourceLanguage / DEFAULTS.SOURCE_LANGUAGE) since parseCliArguments
already applies a default and ProviderFactory.createAndValidateProvider enforces
required fields—either drop the kludgy validation here or change
parseCliArguments/DEFAULTS handling so absence is detectable if you prefer
keeping this check.
```js
estimateOutputTokens(inputTokens, targetLang) {
  // Use conservative.1.4x multiplier for Gemini.
  return Math.round(inputTokens * 1.4);
}
```
Typo in JSDoc and inline comment ("conservative.1.4x") and unused targetLang parameter.
The doc and code comment read "Use conservative.1.4x" — the stray period reads as a typo. Also, targetLang is in the signature/docs but unused; drop it or actually vary the multiplier per language (Russian/German often expand more than Chinese).
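If the language-sensitive route is preferred instead, a minimal sketch (the multiplier values are illustrative assumptions, not measured expansion ratios):

```javascript
// Illustrative per-language expansion multipliers; 1.4 remains the default.
const EXPANSION = { zh: 1.1, ja: 1.2, en: 1.4, de: 1.5, ru: 1.5 };

function estimateOutputTokens(inputTokens, targetLang) {
  const multiplier = EXPANSION[targetLang] || 1.4;
  return Math.round(inputTokens * multiplier);
}

console.log(estimateOutputTokens(1000, 'de')); // 1500
console.log(estimateOutputTokens(1000, 'xx')); // 1400 (default)
```

This keeps `targetLang` meaningful in the signature rather than leaving it as an unused parameter.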
```diff
-  /**
-   * Estimates output tokens based on input tokens.
-   * Uses a conservative multiplier for Gemini models.
-   *
-   * @since 1.0.0
-   *
-   * @param {number} inputTokens - Number of input tokens.
-   * @param {string} targetLang - Target language (unused in base implementation).
-   *
-   * @return {number} Estimated output tokens.
-   */
-  estimateOutputTokens(inputTokens, targetLang) {
-    // Use conservative.1.4x multiplier for Gemini.
-    return Math.round(inputTokens * 1.4);
-  }
+  /**
+   * Estimates output tokens based on input tokens.
+   *
+   * @since 1.0.0
+   *
+   * @param {number} inputTokens - Number of input tokens.
+   *
+   * @return {number} Estimated output tokens.
+   */
+  estimateOutputTokens(inputTokens) {
+    // Use a conservative 1.4x multiplier for Gemini.
+    return Math.round(inputTokens * 1.4);
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/providers/gemini/GeminiProvider.js` around lines 289 - 292, Fix the typo
in the inline comment/doc for estimateOutputTokens ("conservative.1.4x" →
"conservative 1.4x") and remove the unused targetLang parameter from the
estimateOutputTokens signature and any related docs; update all callers that
pass a second argument to call estimateOutputTokens(inputTokens) instead.
Alternatively, if you prefer language-sensitive sizing, implement a small
language multiplier map inside estimateOutputTokens (e.g., {zh:1.1, en:1.4,
de:1.5, ru:1.5}) and use targetLang to pick the multiplier—ensure the comment
reflects the chosen behavior and remove the unused-parameter warning
accordingly.
```js
const translateItem = async (item, itemIndex) => {
  let attempts = 0;

  while (attempts <= maxRetries) {
    try {
      // Translate the string
      const translation = await this._translateText(item.msgid, sourceLang, targetLang, timeoutMs);

      // Handle plural forms - Google Translate doesn't support plural forms directly
      // If the item has a plural form, translate it too
      let msgstr = [translation];

      if (item.msgid_plural) {
        const pluralTranslation = await this._translateText(item.msgid_plural, sourceLang, targetLang, timeoutMs);
        msgstr = Array(pluralCount).fill(pluralTranslation);
        if (pluralCount === 1) {
          msgstr = [translation];
        }
      }

      return {
        msgid: item.msgid,
        msgid_plural: item.msgid_plural || null,
        msgstr,
      };
    } catch (error) {
      attempts++;

      if (attempts > maxRetries) {
        this.logger.error(`Failed to translate "${item.msgid}" after ${maxRetries + 1} attempts`);
        return {
          msgid: item.msgid,
          msgid_plural: item.msgid_plural || null,
          msgstr: Array(pluralCount).fill(''),
          error: error?.message || 'Translation failed',
        };
      }

      // Short delay before retry for Google Translate
      if (error.message?.includes('timeout') || error.message?.includes('429')) {
        await this._sleep(500); // Short delay for retries
      }

      this.logger.warn(`Retry ${attempts}/${maxRetries} for "${item.msgid}": ${error.message}`);
    }
  }

  // Should not reach here, but just in case
  return {
    msgid: item.msgid,
    msgid_plural: item.msgid_plural || null,
    msgstr: Array(pluralCount).fill(''),
    error: 'Translation failed',
  };
};
```
retryDelayMs parameter is silently ignored — retries use a hardcoded 500 ms.
The function signature accepts retryDelayMs (and the README/CLI advertise --retry-delay), but inside translateItem only a fixed await this._sleep(500) is used, and only when the error message contains "timeout" or "429". For other errors the loop retries immediately with no backoff. This breaks the retry contract users configure and can hammer the upstream service on transient errors.
- // Short delay before retry for Google Translate
- if (error.message?.includes('timeout') || error.message?.includes('429')) {
- await this._sleep(500); // Short delay for retries
- }
+ // Apply configured retry delay before next attempt (with light jitter).
+ const jitter = Math.floor(Math.random() * 200);
+ await this._sleep(retryDelayMs + jitter);Also consider exponential backoff (retryDelayMs * 2 ** (attempts-1)) for HTTP 429 specifically.
```js
let msgstr = [translation];

if (item.msgid_plural) {
	const pluralTranslation = await this._translateText(item.msgid_plural, sourceLang, targetLang, timeoutMs);
	msgstr = Array(pluralCount).fill(pluralTranslation);
	if (pluralCount === 1) {
		msgstr = [translation];
	}
}
```
Plural handling drops the singular form for languages with multiple plural forms.
For pluralCount > 1, this fills every slot with the plural translation, so msgstr[0] (singular) is lost. The if (pluralCount === 1) branch then re‑uses the singular for that single slot, which means:
- Languages with 1 plural form (e.g., `zh`): singular only → OK.
- English-style 2 forms (`pluralCount=2`): both slots = plural translation → singular missing.
- Russian/Polish/Croatian (`pluralCount=3`): all 3 slots = plural translation → singular missing, no differentiation between few/many.
The conventional layout in PO files is msgstr[0] = singular, msgstr[1..] = plural. Slot[0] should always be the singular translation:
```diff
 let msgstr = [translation];
 if (item.msgid_plural) {
-	const pluralTranslation = await this._translateText(item.msgid_plural, sourceLang, targetLang, timeoutMs);
-	msgstr = Array(pluralCount).fill(pluralTranslation);
-	if (pluralCount === 1) {
-		msgstr = [translation];
-	}
+	const pluralTranslation = await this._translateText(item.msgid_plural, sourceLang, targetLang, timeoutMs);
+	if (pluralCount <= 1) {
+		msgstr = [translation];
+	} else {
+		msgstr = [translation, ...Array(pluralCount - 1).fill(pluralTranslation)];
+	}
 }
```

Note: Google Translate genuinely doesn't return per-form plurals, so any approach is approximate, but at least `msgstr[0] === singular` should always hold.
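As a self-contained sketch of that slot layout (the `buildMsgstr` helper is hypothetical, introduced only to illustrate the rule that slot 0 stays singular):

```javascript
// Hypothetical helper: slot 0 is always the singular translation;
// any remaining slots share the single plural translation we have.
function buildMsgstr(singular, plural, pluralCount) {
	if (!plural || pluralCount <= 1) {
		return [singular];
	}
	return [singular, ...Array(pluralCount - 1).fill(plural)];
}
```

For a 3-form language this yields `[singular, plural, plural]`: still approximate for few/many, but the singular is never lost.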
```js
_shouldStopRetrying(error) {
	// Don't retry on client errors
	if (error.message && error.message.includes('timeout')) {
		return false; // Timeouts are retriable
	}
	return false; // Most errors are retriable for Google Translate
}
```
_shouldStopRetrying is unreachable dead code — and it always returns false anyway.
The retry loop in _makeApiCallWithRetries (Lines 270–312) implements its own retry control and never calls _shouldStopRetrying. The only reference is from the unit test. As written, both branches return false, so even tests that pass an HTTP 4xx error would conclude "retry forever". Either:
- Wire this into the retry loop and meaningfully classify errors (auth/permission errors should stop retrying), or
- Remove the method to avoid implying behavior that doesn't exist.
```diff
 _shouldStopRetrying(error) {
-	// Don't retry on client errors
-	if (error.message && error.message.includes('timeout')) {
-		return false; // Timeouts are retriable
-	}
-	return false; // Most errors are retriable for Google Translate
+	const msg = error?.message || '';
+	// Auth/permission errors won't be fixed by a retry.
+	if (/HTTP 40[13]\b/.test(msg)) return true;
+	return false;
 }
```

And call it from the catch block in the loop above.
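A standalone sketch of the classifier, under the assumption (taken from the diff above, not verified against the provider) that errors carry an `HTTP <status>` prefix in their message:

```javascript
// Assumed message format: errors surface as "HTTP <status> <reason>".
// Only auth/permission failures (401/403) should stop the retry loop;
// timeouts, 429s, and 5xx errors remain retriable.
function shouldStopRetrying(error) {
	const msg = (error && error.message) || '';
	return /HTTP 40[13]\b/.test(msg);
}
```

In the catch block, `if (shouldStopRetrying(error)) throw error;` would surface auth failures immediately instead of burning through the retry budget.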
```js
const url = provider._buildUrl('Hello World', 'en', 'fr');

expect(url).toBe('https://translate.googleapis.com/translate_a/single?client=gtx&sl=en&tl=fr&dt=t&q=Hello%20World');
});
```
Hard‑coded URL assertion will fail: URLSearchParams encodes space as +, not %20.
GoogleTranslateProvider._buildUrl uses new URLSearchParams({ ..., q: text }) and .toString(). URLSearchParams.toString() encodes spaces as + (per WHATWG URL spec — the form‑urlencoded serializer). So for q='Hello World' the produced URL will end with q=Hello+World, not q=Hello%20World, and this exact‑match assertion fails.
Suggested fix
```diff
-const url = provider._buildUrl('Hello World', 'en', 'fr');
-
-expect(url).toBe('https://translate.googleapis.com/translate_a/single?client=gtx&sl=en&tl=fr&dt=t&q=Hello%20World');
+const url = provider._buildUrl('Hello World', 'en', 'fr');
+
+expect(url).toBe('https://translate.googleapis.com/translate_a/single?client=gtx&sl=en&tl=fr&dt=t&q=Hello+World');
```

Alternatively, parse the URL with `new URL(url)` and assert on `searchParams.get('q')` so the test isn't coupled to the serializer's encoding choice.
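The encoding behavior is easy to confirm in plain Node, and it also shows why asserting on the decoded query parameter is the sturdier test:

```javascript
// URLSearchParams serializes with form-urlencoded rules, so a space
// becomes '+', not '%20'.
const serialized = new URLSearchParams({ q: 'Hello World' }).toString();

// Parsing the full URL decodes '+' back to a space, so a test can assert
// on the decoded value and stay independent of the serializer's choice.
const parsed = new URL('https://translate.googleapis.com/translate_a/single?client=gtx&sl=en&tl=fr&dt=t&q=Hello+World');
const q = parsed.searchParams.get('q');
```

Either assertion style passes here; the decoded form keeps passing even if `_buildUrl` later switches to `encodeURIComponent`.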
```js
it('should handle array of segments', () => {
	// Multiple segments in response
	const mockData = [[['Hello', 'world']]];
	const result = provider._parseResponse(mockData);
	expect(result).toBe('Helloworld');
});
```
Test expectation does not match the provider's implementation — _parseResponse([[['Hello', 'world']]]) returns 'Hello', not 'Helloworld'.
Trace through `_parseResponse(data = [[['Hello', 'world']]])`:

- `data[0]` = `[['Hello', 'world']]` (one outer array containing one inner array)
- `firstElement.map(seg => Array.isArray(seg) ? seg[0] : seg)` → `['Hello']`
- `.join('')` → `'Hello'`
So this test will fail. If you intended to validate that the parser handles the actual Google Translate response shape [[['translated', 'original', null, null, ...], ['next-segment', ...]], ...], use that shape instead:
```diff
-it('should handle array of segments', () => {
-	// Multiple segments in response
-	const mockData = [[['Hello', 'world']]];
-	const result = provider._parseResponse(mockData);
-	expect(result).toBe('Helloworld');
-});
+it('should handle array of segments', () => {
+	// Google Translate splits long inputs into segments.
+	const mockData = [[['Hello ', 'orig1'], ['world', 'orig2']]];
+	const result = provider._parseResponse(mockData);
+	expect(result).toBe('Hello world');
+});
```
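To make the trace concrete, here is a standalone reimplementation of the parsing logic as described (hypothetical; the real `_parseResponse` lives on the provider and may differ in details):

```javascript
// Mirrors the traced logic: take data[0], keep the first entry of each
// segment (segments are [translated, original, ...] arrays), concatenate.
function parseResponse(data) {
	const firstElement = Array.isArray(data) ? data[0] : null;
	if (!Array.isArray(firstElement)) return '';
	return firstElement.map((seg) => (Array.isArray(seg) ? seg[0] : seg)).join('');
}
```

Running it on both shapes shows why the original mock yields `'Hello'` while a two-segment response concatenates to `'Hello world'`.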
Summary
- Added `gemini-pricing.json` to `/config`
- Added support for the `gemini-2.5-pro` and `gemini-2.5-flash` models

Purpose
Enables the project to support Gemini AI alongside existing models like GPT. Useful for teams comparing costs or switching LLM providers.
Let me know if any refactoring or renaming is needed!