
Performance with AWS CDK #6986

Open
rchiodo opened this issue Feb 25, 2025 · 3 comments
Assignees
Labels
needs repro Issue has not been reproduced yet

Comments

@rchiodo
Contributor

rchiodo commented Feb 25, 2025

Discussed in #6984

Originally posted by arjentraas February 24, 2025
Developing Python in VS Code with Windows Subsystem for Linux (WSL) is the best Python development experience for me so far.

The only exception is writing infrastructure as code with the Cloud Development Kit (CDK) from Amazon Web Services (AWS). Pylance struggles to load type hints, auto-imports, and syntax highlighting; it can take up to 10 seconds, and it even crashes if I write too much code in a short time window.

Switching from WSL to Windows helps a bit. Pylance speeds up a little, while other things slow down, so I end up accepting the CDK performance hit and reverting to WSL.

These log entries show up a lot:

[BG(1)] Long operation: analyzing: file:///home/user/sources/my_application/stacks/my_stack.py (2541ms)

Anyone have the same experience? Or even better, a solution?

@github-actions github-actions bot added the needs repro Issue has not been reproduced yet label Feb 25, 2025
@rchiodo
Contributor Author

rchiodo commented Feb 25, 2025

Here's some example code that reproduces the issue:

from aws_cdk import Stack, aws_lambda as _lambda
from aws_cdk import aws_sqs as sqs
from aws_cdk import aws_lambda_event_sources as eventsources
from aws_cdk import aws_iam as iam
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_ecr as ecr
from aws_cdk import aws_rds as rds
from constructs import Construct

class LambdaSqsStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Create a VPC with public subnets
        vpc = ec2.Vpc(
            self, "MyVPC",
            max_azs=2,
            subnet_configuration=[
                ec2.SubnetConfiguration(
                    name="PublicSubnet",
                    subnet_type=ec2.SubnetType.PUBLIC
                )
            ]
        )

        # Create an ECR repository for storing the Lambda container image
        ecr_repository = ecr.Repository(self, "MyLambdaRepository")

        # Create an SQS queue
        queue = sqs.Queue(self, "MyQueue")

        # Create an IAM role for the Lambda function
        lambda_role = iam.Role(
            self, "LambdaExecutionRole",
            assumed_by=iam.ServicePrincipal("lambda.amazonaws.com"),
            managed_policies=[
                iam.ManagedPolicy.from_aws_managed_policy_name("service-role/AWSLambdaBasicExecutionRole"),
                iam.ManagedPolicy.from_aws_managed_policy_name("AmazonSQSFullAccess"),
                iam.ManagedPolicy.from_aws_managed_policy_name("AmazonRDSFullAccess")
            ]
        )

        # Create an Aurora database
        db_cluster = rds.DatabaseCluster(
            self, "MyAuroraCluster",
            engine=rds.DatabaseClusterEngine.AURORA_MYSQL,  # static property, not a callable
            instances=1,
            vpc=vpc,
            credentials=rds.Credentials.from_generated_secret("admin"),
            default_database_name="mydatabase"
        )

        # Create a Lambda function that processes messages from the queue using a container image
        lambda_function = _lambda.DockerImageFunction(
            self, "MyLambda",
            code=_lambda.DockerImageCode.from_ecr(ecr_repository),
            environment={
                "QUEUE_URL": queue.queue_url
            },
            role=lambda_role,
            vpc=vpc
        )

        # Grant the Lambda function permissions to read messages from the queue
        queue.grant_consume_messages(lambda_function)

        # Configure the Lambda function to be triggered by the SQS queue
        lambda_function.add_event_source(eventsources.SqsEventSource(queue))

        # Create a Lambda function that publishes messages to the queue and accesses the Aurora database
        publisher_lambda = _lambda.Function(
            self, "PublisherLambda",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="publisher_lambda.handler",
            code=_lambda.Code.from_asset("lambda"),
            environment={
                "QUEUE_URL": queue.queue_url,
                "DB_ENDPOINT": db_cluster.cluster_endpoint.hostname
            },
            role=lambda_role,
            vpc=vpc
        )

        # Grant the publisher Lambda permissions to send messages to the queue and access the database
        queue.grant_send_messages(publisher_lambda)
        db_cluster.grant_connect(publisher_lambda)

from aws_cdk import App  # `core` was never imported; in CDK v2, App lives in aws_cdk

app = App()
LambdaSqsStack(app, "LambdaSqsStack")
app.synth()

Hover over _lambda.Function. Analysis will take 3 seconds for me.
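For anyone else trying to reproduce this: the "Long operation" entries quoted above appear in the Pylance output channel, and raising the analysis log level makes them easier to catch. A minimal `settings.json` sketch, assuming a stock Pylance install (`python.analysis.logLevel` is the standard setting; `"Trace"` is its most verbose value):

```jsonc
{
  // Pylance: log background analysis activity, including
  // "[BG(n)] Long operation: analyzing: ... (Nms)" entries
  "python.analysis.logLevel": "Trace"
}
```

With this set, the Python Language Server output channel should show per-file analysis timings like the ones in this issue.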

@debonte
Contributor

debonte commented Feb 28, 2025

Is this the issue that @heejaechang was addressing with microsoft/pyright#9993?

@rchiodo
Contributor Author

rchiodo commented Feb 28, 2025

No, I don't believe so. This is not a memory problem but rather a performance issue.
