
docs: add log population script #2885

Merged
merged 2 commits into from
Feb 10, 2025
Conversation

ogzhanolguncu
Contributor

@ogzhanolguncu commented Feb 10, 2025

What does this PR do?

Fixes # (issue)

If there is not an issue for this, please create one first. This is used for tracking purposes and also helps us understand why this PR exists.

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • Chore (refactoring code, technical debt, workflow improvements)
  • Enhancement (small improvements)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How should this be tested?

  • Test A
  • Test B

Checklist

Required

  • Filled out the "How to test" section in this PR
  • Read Contributing Guide
  • Self-reviewed my own code
  • Commented on my code in hard-to-understand areas
  • Ran pnpm build
  • Ran pnpm fmt
  • Checked for warnings, there are none
  • Removed all console.logs
  • Merged the latest changes from main onto my branch with git pull origin main
  • My changes don't cause any responsiveness issues

Appreciated

  • If a UI change was made: Added a screen recording or screenshots to this PR
  • Updated the Unkey Docs if changes were necessary

Summary by CodeRabbit

  • New Features
    • Expanded the contributing documentation with an additional guide titled "Populating Logs," detailing how to test API functionality through enhanced logging and rate limit verification. This update enriches the content available to users in the contributing section, offering practical instructions for troubleshooting and verifying API responses.
    • Added a new page entry to the contributing section, increasing the overall documentation available to users.


vercel bot commented Feb 10, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

Name Status Preview Comments Updated (UTC)
engineering ✅ Ready (Inspect) Visit Preview 💬 Add feedback Feb 10, 2025 5:17pm
play ✅ Ready (Inspect) Visit Preview 💬 Add feedback Feb 10, 2025 5:17pm
www ✅ Ready (Inspect) Visit Preview 💬 Add feedback Feb 10, 2025 5:17pm
1 Skipped Deployment
Name Status Preview Comments Updated (UTC)
dashboard ⬜️ Ignored (Inspect) Visit Preview Feb 10, 2025 5:17pm


changeset-bot bot commented Feb 10, 2025

⚠️ No Changeset found

Latest commit: 41b12a9

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types

Click here to learn what changesets are, and how to add one.

Click here if you're a maintainer who wants to add a changeset to this PR

Contributor

coderabbitai bot commented Feb 10, 2025

Caution

Review failed

The pull request is closed.

📝 Walkthrough

Walkthrough

This pull request updates the configuration for the Contributing section by modifying the JSON file to add a new page entry, "populating-logs". It also introduces a new Markdown file that provides a script for testing API functionality. The script includes various logging functions, API call triggers, and mechanisms for handling rate limits during testing.

Changes

File(s) Summary
apps/engineering/…/meta.json Updated the "pages" array by adding the "populating-logs" entry.
apps/engineering/…/populating-logs.mdx Created a new Markdown file with a script for testing API endpoints, logging functions, and rate limits.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant Script as Logging Script
    participant API as API Service

    Dev->>Script: Execute script with API key
    Script->>Script: Initialize logging configurations
    Script->>API: Send API call (e.g., success, warning, error)
    API-->>Script: Return response status and data
    Script->>Dev: Output log messages based on response
```

Possibly related PRs

  • docs: checks for pull requests #2884: The changes in the main PR are related to the addition of a new entry in the "pages" array of the same meta.json file, which is also modified in the retrieved PR to include a different new entry.

Suggested Reviewers

  • mcstepp
  • chronark
  • MichaelUnkey
  • perkinsjr

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 48df208 and 41b12a9.

📒 Files selected for processing (1)
  • apps/engineering/content/contributing/meta.json (1 hunks)

Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR. (Beta)
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor

github-actions bot commented Feb 10, 2025

Thank you for following the naming conventions for pull request titles! 🙏

Contributor

@coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (5)
apps/engineering/content/contributing/populating-logs.mdx (5)

5-9: Introduction and Callout Section

The introductory text and callout provide a basic context for the script. For enhanced clarity, consider expanding on prerequisites (like where to obtain an API key or how to enable rate limits) and the expected outcome of running the script.


11-14: Logs Section Heading and Instructions

The "### Logs" section is clearly marked, and the instruction to ensure an API key is provided is helpful. Providing a brief note on where or how to obtain the key could further assist users.


65-74: API Key Validation and Argument Handling

The script validates that an API key is provided by checking the number of command-line arguments. Note that while API_KEY is obtained from the first argument, ROOT_KEY is set from the second argument without explicit validation. Consider adding a check or a default behavior for ROOT_KEY if it’s required for subsequent API operations.
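A minimal sketch of the stricter handling suggested above. The function and variable names here are illustrative (they are not taken verbatim from the PR), and the fallback behavior for ROOT_KEY is one possible choice:

```shell
# Hypothetical argument validation sketch; names are illustrative.
usage() {
  echo "Usage: $0 <api_key> [root_key]" >&2
}

validate_args() {
  if [ "$#" -lt 1 ]; then
    usage
    return 1
  fi
  API_KEY="$1"
  # Fall back to the API key when no root key is supplied,
  # instead of leaving ROOT_KEY silently empty.
  ROOT_KEY="${2:-$1}"
}
```

Calling `validate_args "$@"` at the top of the script would then fail fast with a usage message instead of proceeding with an empty ROOT_KEY.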


144-184: API Call Function with Randomized Request Distribution

The make_api_call() function randomly selects between triggering an error, a warning, or making a regular API call based on probability. This structure is useful for testing multiple response scenarios. For more granular error detection, consider explicitly evaluating HTTP status codes from the API response.
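The weighted random branching can be isolated into its own helper, which makes the probabilities easy to audit and tune. This sketch assumes a roughly 10/20/70 split (the PR's exact weights and the name `choose_action` are assumptions):

```shell
# Illustrative weighted selection: ~10% error, ~20% warning, ~70% regular.
choose_action() {
  local roll=$(( RANDOM % 100 ))
  if [ "$roll" -lt 10 ]; then
    echo "error"      # trigger a 500-style failure
  elif [ "$roll" -lt 30 ]; then
    echo "warning"    # trigger a 400-style warning
  else
    echo "regular"    # normal API call
  fi
}
```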


248-358: Ratelimit Logs Script Evaluation

The "Ratelimit Logs" section embeds a separate shell script that:

  • Processes command-line arguments to extract an API key.
  • Defines a set of test identifiers.
  • Iterates over these identifiers to execute multiple API calls while logging the results.

The code is clear and functional for local testing purposes. A couple of points to consider:

  • The endpoint URL (http://localhost:8787/v1/ratelimits.limit) is hard-coded; parameterizing it may increase reusability.
  • More robust error handling could be implemented to manage unexpected API responses in a production-like scenario.
    Overall, this script serves its temporary testing purpose well.
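One way to implement the parameterization suggested above is an environment-variable override with a localhost default. `UNKEY_BASE_URL` is a hypothetical variable name, not something the PR defines:

```shell
# Hypothetical endpoint parameterization: override the base URL via
# an environment variable, defaulting to the local dev server.
ratelimit_endpoint() {
  echo "${UNKEY_BASE_URL:-http://localhost:8787}/v1/ratelimits.limit"
}
```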
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 50b22a6 and 48df208.

📒 Files selected for processing (2)
  • apps/engineering/content/contributing/meta.json (1 hunks)
  • apps/engineering/content/contributing/populating-logs.mdx (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (16)
  • GitHub Check: Test Packages / Test ./packages/rbac
  • GitHub Check: Test Packages / Test ./packages/hono
  • GitHub Check: Test Packages / Test ./packages/cache
  • GitHub Check: Test Packages / Test ./packages/api
  • GitHub Check: Test Packages / Test ./internal/clickhouse
  • GitHub Check: Test Packages / Test ./internal/resend
  • GitHub Check: Test Packages / Test ./internal/keys
  • GitHub Check: Test Packages / Test ./internal/id
  • GitHub Check: Test Packages / Test ./internal/hash
  • GitHub Check: Test Packages / Test ./internal/encryption
  • GitHub Check: Test Packages / Test ./internal/billing
  • GitHub Check: Build / Build
  • GitHub Check: Test API / API Test Local
  • GitHub Check: Test Agent Local / test_agent_local
  • GitHub Check: autofix
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (8)
apps/engineering/content/contributing/meta.json (1)

6-13: New Page Entry Addition

The "pages" array now correctly includes the new entry "populating-logs". Please confirm that the corresponding Markdown file (populating-logs.mdx) exists and is referenced correctly within the documentation hierarchy.

apps/engineering/content/contributing/populating-logs.mdx (7)

1-3: YAML Front Matter Validation

The YAML front matter sets the document title to "Populating Logs" correctly. Consider adding additional metadata (e.g., date, author) if required by project guidelines.


15-64: Robust Logging Functions with Terminal Color Support

The code block effectively checks for terminal color support and defines several logging functions (header, success, error, warning, info, debug). This improves the readability of script output in supported terminals.
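The tty-aware color pattern described here typically looks like the following sketch (function and variable names are illustrative, not the PR's exact code):

```shell
# Emit ANSI colors only when stdout is a terminal; plain text when
# the output is piped or redirected.
if [ -t 1 ]; then
  GREEN=$(printf '\033[0;32m'); RED=$(printf '\033[0;31m'); RESET=$(printf '\033[0m')
else
  GREEN=""; RED=""; RESET=""
fi

log_success() { printf '%s[SUCCESS]%s %s\n' "$GREEN" "$RESET" "$1"; }
log_error()   { printf '%s[ERROR]%s %s\n'   "$RED"   "$RESET" "$1" >&2; }
```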


76-83: Definition of API Endpoints and Rate Limit Settings

The API endpoints and rate limit parameters are clearly defined and use "localhost" for testing. If this script ever moves beyond local testing, consider making these endpoints configurable via environment variables or additional command-line parameters.


85-102: Random Huge Number Generator Function

The generate_huge_number() function creates a random number with a length between 50 and 100 digits. This is appropriate for testing boundary conditions. (Note: Using the bash built-in $RANDOM is acceptable for this test context, though it is not cryptographically secure.)
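A generator matching that description can be sketched as follows (the loop details are an assumption; only the 50–100 digit length comes from the review):

```shell
# Build a random digit string whose length is between 50 and 100.
# $RANDOM is fine here for test data, though not cryptographically secure.
generate_huge_number() {
  local len=$(( 50 + RANDOM % 51 ))
  local num=""
  local i
  for (( i = 0; i < len; i++ )); do
    num="${num}$(( RANDOM % 10 ))"
  done
  echo "$num"
}
```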


104-126: Function to Trigger 500 Error

The trigger_500_error() function constructs a payload with extremely large numeric values and sends it to an endpoint ($RATELIMIT_ENDPOINT.limit) intended to trigger a server error. Please verify that appending .limit to the base endpoint is intentional and consistent with your backend API design.


128-142: Function to Trigger Warning (400 Error)

This function simulates a warning scenario by performing a GET request with an invalid namespace ID. As with the 500 error function, verify that appending .listOverrides to $RATELIMIT_ENDPOINT produces the expected API behavior.


197-245: Infinite Loop for API Request Execution

The infinite loop efficiently handles API request bursts, interval tracking, and periodic rate limit reconfiguration. Be sure that the sleep intervals (request_delay and the 1-second pauses) and burst parameters are tuned to suit your testing environment, so as not to overwhelm the API server.
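The burst structure can be sketched as a bounded helper for illustration (the PR's loop runs indefinitely; `run_bursts` and the no-op `make_api_call` stand-in are hypothetical):

```shell
make_api_call() { :; }   # stand-in for the real request function

# Send `total` requests in bursts of `burst_size`, sleeping `delay`
# seconds between bursts so the API server is not overwhelmed.
run_bursts() {
  local total="$1" burst_size="$2" delay="$3"
  local sent=0 i
  while [ "$sent" -lt "$total" ]; do
    for (( i = 0; i < burst_size && sent < total; i++ )); do
      make_api_call
      sent=$(( sent + 1 ))
    done
    sleep "$delay"
  done
  echo "$sent"
}
```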

Comment on lines +186 to +194

```shell
# Script initialization
log_header "API Testing Script Initialization"
log_info "API Key: ${API_KEY:0:4}...${API_KEY: -4}"
log_info "Rate Limit: $RATE_LIMIT_TOKENS requests per $RATE_LIMIT_INTERVAL ms"
log_info "Total Usage Limit: $TOTAL_USES_LIMIT requests"

# Initial rate limit configuration
log_header "Initial Setup"
set_rate_limit
```
Contributor


⚠️ Potential issue

Script Initialization and Rate Limit Setup

The script initialization logs useful information (truncated API key, rate limit configuration) and calls set_rate_limit to establish the initial configuration. However, the function set_rate_limit is invoked (at lines 194 and later on line 208) without being defined or imported in this file. Please ensure that set_rate_limit is properly defined to avoid runtime errors.
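One possible shape for the missing function, assuming the script's `ROOT_KEY`, `RATELIMIT_ENDPOINT`, `API_KEY`, `RATE_LIMIT_TOKENS`, and `RATE_LIMIT_INTERVAL` variables. The payload fields, the namespace name, and the `.setOverride` suffix are assumptions about the backend API, not taken from the PR:

```shell
# Hypothetical payload builder; field names are assumptions.
rate_limit_payload() {
  printf '{"namespaceName":"%s","identifier":"%s","limit":%s,"duration":%s}' \
    "$1" "$2" "$3" "$4"
}

# Hypothetical set_rate_limit; endpoint suffix is an assumption.
set_rate_limit() {
  curl -s -X POST "$RATELIMIT_ENDPOINT.setOverride" \
    -H "Authorization: Bearer $ROOT_KEY" \
    -H "Content-Type: application/json" \
    -d "$(rate_limit_payload "test" "$API_KEY" "$RATE_LIMIT_TOKENS" "$RATE_LIMIT_INTERVAL")"
}
```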
