feat: POST to integrations settings API on onboarding completion #127

Open · wants to merge 2 commits into master

Conversation

JustARatherRidiculouslyLongUsername (Contributor) commented Jan 31, 2025

(cherry picked from commit 8f9fe42)

Clickup

https://app.clickup.com/t/86cxm0f86

Summary by CodeRabbit

Release Notes

  • New Features

    • Enhanced API request handling to support HTTP 201 status code.
    • Added integration settings posting functionality for workspaces upon completing onboarding.
  • Configuration Updates

    • Introduced new environment variable for integration settings API.
  • Improvements

    • Updated workspace onboarding workflow to include integration settings posting.

These changes improve API interaction robustness and extend workspace configuration capabilities.


coderabbitai bot commented Jan 31, 2025

Walkthrough

This pull request introduces changes across multiple files to enhance integration settings and request handling. The modifications include expanding HTTP status code handling in a helper function, adding a new function to post integration settings for workspaces, and introducing new environment variable configurations. The changes appear to be part of a broader effort to improve integration workflows and configuration management.

Changes

File | Change Summary
apps/fyle/helpers.py | Modified post_request to accept both 200 and 201 HTTP status codes as successful responses
apps/workspaces/serializers.py | Added a call to post_to_integration_settings() in the create method of AdvancedSettingSerializer
apps/workspaces/tasks.py | Added a new post_to_integration_settings() function to post integration settings for a workspace
quickbooks_desktop_api/settings.py | Added the new environment variable INTEGRATIONS_SETTINGS_API
quickbooks_desktop_api/tests/settings.py | Added environment variable retrieval for INTEGRATIONS_SETTINGS_API

Possibly related PRs

Suggested labels

deploy, size/L

Suggested reviewers

  • ruuushhh
  • ashwin1111

Poem

🐰 Integrations dance and sway,
Code hops with a brand new way!
Status codes now embrace success,
Settings flow with gentle finesse,
A rabbit's leap of API delight! 🚀

github-actions bot added the size/S Small PR label Jan 31, 2025

Tests: 67 | Skipped: 0 💤 | Failures: 0 ❌ | Errors: 0 🔥 | Time: 9.394s ⏱️


coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
quickbooks_desktop_api/tests/settings.py (1)

249-249: Consider adding a default value for the environment variable.

To prevent potential None values in case the environment variable is not set, consider providing a default value.

-INTEGRATIONS_SETTINGS_API = os.environ.get('INTEGRATIONS_SETTINGS_API')
+INTEGRATIONS_SETTINGS_API = os.environ.get('INTEGRATIONS_SETTINGS_API', 'http://localhost:8000/api/v1')
apps/workspaces/tasks.py (1)

186-186: Use ISO format method instead of strftime.

Replace the hardcoded datetime format string with Python's built-in ISO format method.

-        'connected_at': datetime.now().strftime('%Y-%m-%dT%H:%M:%S.%fZ')
+        'connected_at': datetime.now(timezone.utc).isoformat()
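A quick illustration of the difference between the two (timestamps shown are examples only):

from datetime import datetime, timezone

now = datetime.now(timezone.utc)
print(now.strftime('%Y-%m-%dT%H:%M:%S.%fZ'))  # e.g. 2025-01-31T10:15:30.123456Z
print(now.isoformat())                        # e.g. 2025-01-31T10:15:30.123456+00:00

Note the two spell UTC differently ('Z' vs '+00:00'); if the integrations API expects the trailing 'Z', the isoformat() output would need a small adjustment.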
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a2da725 and a8fbc6d.

📒 Files selected for processing (5)
  • apps/fyle/helpers.py (1 hunks)
  • apps/workspaces/serializers.py (2 hunks)
  • apps/workspaces/tasks.py (1 hunks)
  • quickbooks_desktop_api/settings.py (1 hunks)
  • quickbooks_desktop_api/tests/settings.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: pytest
🔇 Additional comments (3)
apps/fyle/helpers.py (1)

31-31: LGTM! Enhanced status code handling.

The addition of HTTP 201 status code handling aligns with REST API standards for resource creation responses.
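For reference, a minimal sketch of a helper with the broadened status check; the header construction and error handling here are assumptions for illustration, not the actual apps/fyle/helpers.py implementation:

import json
import requests

def post_request(url: str, body: str, refresh_token: str) -> dict:
    # Sketch only: the real helper's auth handling may differ
    headers = {
        'content-type': 'application/json',
        'Authorization': f'Bearer {refresh_token}',  # assumption
    }
    response = requests.post(url, headers=headers, data=body)

    # 200 (OK) and 201 (Created) are both treated as success after this PR
    if response.status_code in (200, 201):
        return json.loads(response.text)

    raise Exception(response.text)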

apps/workspaces/serializers.py (1)

229-229: LGTM! Well-placed integration call.

The integration settings update is correctly placed after the workspace onboarding is complete.
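For orientation, a simplified stand-in for that call site; this is not the actual AdvancedSettingSerializer.create(), and the onboarding-state handling shown is an assumption (the field name is taken from the backfill script later in this PR):

from apps.workspaces.tasks import post_to_integration_settings

def complete_workspace_onboarding(workspace):
    # Stand-in for the tail end of AdvancedSettingSerializer.create():
    # once advanced settings are saved, onboarding is marked complete...
    workspace.onboarding_state = 'COMPLETE'  # assumption
    workspace.save()

    # ...and (new in this PR) the integrations settings API is notified
    post_to_integration_settings(workspace.id, active=True)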

quickbooks_desktop_api/settings.py (1)

271-271: Verify environment variable configuration.

The new environment variable INTEGRATIONS_SETTINGS_API is added without a default value. This could cause issues if the variable is not set in the environment.

Run the following script to check if this variable is properly configured in deployment files and documentation:

Consider adding a default value or documenting that this is a required environment variable:

-INTEGRATIONS_SETTINGS_API = os.environ.get('INTEGRATIONS_SETTINGS_API')
+# Required for integration settings functionality
+INTEGRATIONS_SETTINGS_API = os.environ.get('INTEGRATIONS_SETTINGS_API', None)
+if not INTEGRATIONS_SETTINGS_API:
+    raise ValueError('INTEGRATIONS_SETTINGS_API environment variable is required')

Comment on lines 189 to 195
    try:
        post_request(url, json.dumps(payload), refresh_token)
        org_id = Workspace.objects.get(id=workspace_id).org_id
        logger.info(f'New integration record: Fyle Quickbooks Desktop (IIF) Integration (ACCOUNTING) | {workspace_id = } | {org_id = }')

    except Exception as error:
        logger.error(error)


🛠️ Refactor suggestion

Enhance error handling with specific exceptions.

The current error handling is too broad. Consider catching specific exceptions and providing more context in the error message.

     try:
         post_request(url, json.dumps(payload), refresh_token)
         org_id = Workspace.objects.get(id=workspace_id).org_id
         logger.info(f'New integration record: Fyle Quickbooks Desktop (IIF) Integration (ACCOUNTING) | {workspace_id = } | {org_id = }')
-    except Exception as error:
-        logger.error(error)
+    except requests.RequestException as error:
+        logger.error(f"Failed to post to integration settings API: {error}")
+        raise
+    except Workspace.DoesNotExist:
+        logger.error(f"Workspace not found with id: {workspace_id}")
+        raise
+    except Exception as error:
+        logger.error(f"Unexpected error while posting to integration settings: {error}")
+        raise

Comment on lines +175 to +178
def post_to_integration_settings(workspace_id: int, active: bool):
    """
    Post to integration settings
    """


🛠️ Refactor suggestion

Add input validation for workspace_id.

The function should validate that the workspace exists before proceeding.

 def post_to_integration_settings(workspace_id: int, active: bool):
     """
     Post to integration settings
     """
+    workspace = Workspace.objects.get(id=workspace_id)
+    if not workspace:
+        raise ValueError(f"Workspace not found with id: {workspace_id}")

Committable suggestion skipped: line range outside the PR's diff.
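Piecing the quoted snippets together, the new task in apps/workspaces/tasks.py plausibly looks like the sketch below. The endpoint path, payload field names, and credential lookup are illustrative assumptions; only the signature, the connected_at line, and the try/except block are quoted from the review above.

import json
import logging
from datetime import datetime

from django.conf import settings

from apps.fyle.helpers import post_request
from apps.workspaces.models import Workspace

logger = logging.getLogger(__name__)


def post_to_integration_settings(workspace_id: int, active: bool):
    """
    Post to integration settings
    """
    url = f'{settings.INTEGRATIONS_SETTINGS_API}/tpa/'  # path is an assumption
    refresh_token = None  # assumption: real code likely reads this from stored Fyle credentials
    payload = {
        'tpa_name': 'Fyle Quickbooks Desktop (IIF) Integration',  # field names are assumptions
        'type': 'ACCOUNTING',
        'is_active': active,
        'connected_at': datetime.now().strftime('%Y-%m-%dT%H:%M:%S.%fZ')
    }

    try:
        # json.dumps here matches the code under review; a later commit in this PR removes it (see the #125 note below)
        post_request(url, json.dumps(payload), refresh_token)
        org_id = Workspace.objects.get(id=workspace_id).org_id
        logger.info(f'New integration record: Fyle Quickbooks Desktop (IIF) Integration (ACCOUNTING) | {workspace_id = } | {org_id = }')
    except Exception as error:
        logger.error(error)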

* feat: backfill integration records

(cherry picked from commit f84275d)

* fix: add error handling and fail count to output

* fix: remove `json.dumps` from `post_request` call

This behaviour was changed in #125
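In practice the call in post_to_integration_settings changed roughly like this, assuming post_request now serializes the payload itself (per #125):

-    post_request(url, json.dumps(payload), refresh_token)
+    post_request(url, payload, refresh_token)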
github-actions bot added size/M Medium PR and removed size/S Small PR labels Feb 3, 2025

github-actions bot commented Feb 3, 2025

Tests: 67 | Skipped: 0 💤 | Failures: 0 ❌ | Errors: 0 🔥 | Time: 8.949s ⏱️


coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
scripts/python/003_create_integration_records.py (3)

1-6: Enhance logging configuration and add documentation.

Consider improving the script's setup:

  1. Add a docstring explaining the script's purpose and usage
  2. Use proper logging configuration with file handler
+"""
+Backfill script to process workspaces with completed onboarding state
+and post their integration settings.
+
+Usage:
+    python scripts/python/003_create_integration_records.py
+"""
 import logging
 from apps.workspaces.models import Workspace
 from apps.workspaces.tasks import post_to_integration_settings
 
-logger = logging.getLogger(__name__)
-logger.level = logging.INFO
+logging.basicConfig(
+    level=logging.INFO,
+    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
+    handlers=[
+        logging.FileHandler('integration_records_backfill.log'),
+        logging.StreamHandler()
+    ]
+)
+logger = logging.getLogger(__name__)

8-8: LGTM! Consider adding a comment for clarity.

The counter initialization is correct, but could benefit from a brief comment explaining their purpose.

+# Counters to track processing results
 processed = failed = 0

23-28: Enhance summary logging with timing information.

Consider adding execution duration and structuring the summary for better readability.

+import time
+
+start_time = time.time()
 # ... processing ...
+duration = time.time() - start_time
 logger.info(
-f"""
-Completed backfill. Total: {workspaces.count()}
-Processed: {processed}, Failed: {failed}
-"""
+    "Backfill Summary:\n"
+    f"Total workspaces: {total_count}\n"
+    f"Successfully processed: {processed}\n"
+    f"Failed: {failed}\n"
+    f"Duration: {duration:.2f} seconds"
 )
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a8fbc6d and 0393724.

📒 Files selected for processing (2)
  • apps/workspaces/tasks.py (1 hunks)
  • scripts/python/003_create_integration_records.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/workspaces/tasks.py
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: pytest
🔇 Additional comments (1)
scripts/python/003_create_integration_records.py (1)

3-3: Verify integration settings API endpoint configuration.

Ensure that the integration settings API endpoint is properly configured and accessible.

✅ Verification successful

Integration Settings API Endpoint Verified.

  • The function post_to_integration_settings is defined in apps/workspaces/tasks.py and correctly utilizes settings.INTEGRATIONS_SETTINGS_API.
  • The INTEGRATIONS_SETTINGS_API variable is set in quickbooks_desktop_api/settings.py (and in tests), ensuring that the API endpoint is configured via an environment variable.
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the integration settings API endpoint configuration

# Search for the API endpoint definition
rg -l "post_to_integration_settings"

# Check for any environment variables or configuration related to the integration settings API
rg -l "INTEGRATION_SETTINGS.*URL|INTEGRATION.*API.*URL" --type ini --type env

Length of output: 249


Script:

#!/bin/bash
# Verify the integration settings API endpoint configuration by checking its definition and related settings.

# 1. Check for the function definition in apps/workspaces/tasks.py (with some context lines)
echo "Searching for the 'post_to_integration_settings' function definition in apps/workspaces/tasks.py:"
rg -A 10 "def post_to_integration_settings" apps/workspaces/tasks.py

echo ""
# 2. Search the entire repository for integration configuration environment variables without file-type restrictions
echo "Searching for integration settings configuration (environment variables or URLs):"
rg "INTEGRATION_SETTINGS.*URL|INTEGRATION.*API.*URL" .

Length of output: 981


Script:

#!/bin/bash
# Description: Further verify the integration settings API endpoint configuration by checking
# for the "INTEGRATIONS_SETTINGS_API" variable referenced in the tasks file within the settings.

echo "Searching for the 'INTEGRATIONS_SETTINGS_API' setting in the repository:"
rg "INTEGRATIONS_SETTINGS_API" .

Length of output: 516

Comment on lines +10 to +21
workspaces = Workspace.objects.filter(onboarding_state='COMPLETE')
for workspace in workspaces:
    try:
        logger.info(f"Processing workspace: {workspace.id} | {workspace.name}")
        post_to_integration_settings(workspace.id, True)
        processed += 1
    except Exception as e:
        failed += 1
        logger.error(
            f"Failed to process workspace {workspace.id}: {str(e)}",
            exc_info=True
        )


🛠️ Refactor suggestion

Enhance robustness and performance of workspace processing.

The current implementation could benefit from several improvements:

  1. Batch processing to handle large datasets efficiently
  2. Progress tracking for long-running operations
  3. Retry mechanism for transient failures
-workspaces = Workspace.objects.filter(onboarding_state='COMPLETE')
-for workspace in workspaces:
+BATCH_SIZE = 100
+total_count = Workspace.objects.filter(onboarding_state='COMPLETE').count()
+for offset in range(0, total_count, BATCH_SIZE):
+    workspaces = Workspace.objects.filter(
+        onboarding_state='COMPLETE'
+    )[offset:offset + BATCH_SIZE]
+    logger.info(f"Processing batch {offset//BATCH_SIZE + 1} of {(total_count + BATCH_SIZE - 1)//BATCH_SIZE}")
+    for workspace in workspaces:
+        retries = 3
+        while retries > 0:
             try:
                 logger.info(f"Processing workspace: {workspace.id} | {workspace.name}")
                 post_to_integration_settings(workspace.id, True)
                 processed += 1
+                break
             except Exception as e:
-                failed += 1
-                logger.error(
-                    f"Failed to process workspace {workspace.id}: {str(e)}",
-                    exc_info=True
-                )
+                retries -= 1
+                if retries == 0:
+                    failed += 1
+                    logger.error(
+                        f"Failed to process workspace {workspace.id} after 3 attempts: {str(e)}",
+                        exc_info=True
+                    )
+                else:
+                    logger.warning(
+                        f"Retry {3-retries}/3 for workspace {workspace.id}: {str(e)}"
+                    )
