Solution #418
base: main
Conversation
Great job on implementing the project! 🎉 Your code meets the task requirements and is well-structured. The database setup, models, and CRUD operations are correctly implemented, and the FastAPI application is functional. While there are a few suggestions for improvement, such as adding pagination to the `read_books` and `read_authors` endpoints and ensuring the `author_id` in the `Book` model is non-nullable, these are minor and do not impact the core functionality. Keep up the good work and consider these enhancements for future iterations! 😊
```python
op.create_table('author',
    sa.Column('id', sa.Integer(), nullable=False),
    sa.Column('name', sa.String(length=255), nullable=True),
    sa.Column('bio', sa.String(length=255), nullable=False),
```
The `bio` column in the `author` table is marked as `nullable=False`, which means it cannot be empty. Ensure this is intentional, as it requires every author to have a bio.
```python
sa.Column('title', sa.String(length=255), nullable=False),
sa.Column('summary', sa.String(length=255), nullable=False),
sa.Column('publication_date', sa.Date(), nullable=False),
sa.Column('author_id', sa.Integer(), nullable=True),
```
The `author_id` column in the `book` table is nullable. If every book must have an author, consider setting `nullable=False`.
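At the migration level, the change is a single keyword argument. A hedged sketch of the `book` column set with `author_id` made mandatory (column names are taken from the diff above; whether author-less books are valid is ultimately the author's call):

```python
import sqlalchemy as sa

# Column set as it might appear in op.create_table('book', ...),
# with author_id tightened from nullable=True to nullable=False.
book_columns = [
    sa.Column('id', sa.Integer(), nullable=False),
    sa.Column('title', sa.String(length=255), nullable=False),
    sa.Column('summary', sa.String(length=255), nullable=False),
    sa.Column('publication_date', sa.Date(), nullable=False),
    sa.Column('author_id', sa.Integer(), nullable=False),  # was nullable=True
]
print(book_columns[-1].nullable)  # False
```

If this migration has already been applied somewhere, tightening the constraint would need a follow-up migration using `op.alter_column('book', 'author_id', existing_type=sa.Integer(), nullable=False)` rather than editing the original revision file.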
```python
__tablename__ = 'author'
id = Column(Integer, primary_key=True, index=True)
name = Column(String(255), unique=True)
bio = Column(String(255), nullable=False)
```
The `bio` column in the `Author` model is marked as `nullable=False`, which means it cannot be empty. Ensure this is intentional, as it requires every author to have a bio.
```python
title = Column(String(255), nullable=False)
summary = Column(String(255), nullable=False)
publication_date = Column(Date, nullable=False)
author_id = Column(Integer, ForeignKey('author.id'))
```
The `author_id` column in the `Book` model is nullable. If every book must have an author, consider setting `nullable=False`.
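If author-less books should be impossible, the constraint can be enforced in the model itself. A runnable sketch against an in-memory SQLite database (a simplified stand-in for the project's models, assuming SQLAlchemy 1.4+) showing the database rejecting a book with no author:

```python
import datetime

from sqlalchemy import Column, Date, ForeignKey, Integer, String, create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Author(Base):
    __tablename__ = 'author'
    id = Column(Integer, primary_key=True, index=True)
    name = Column(String(255), unique=True)
    bio = Column(String(255), nullable=False)

class Book(Base):
    __tablename__ = 'book'
    id = Column(Integer, primary_key=True, index=True)
    title = Column(String(255), nullable=False)
    summary = Column(String(255), nullable=False)
    publication_date = Column(Date, nullable=False)
    # nullable=False: the database itself now rejects books without an author
    author_id = Column(Integer, ForeignKey('author.id'), nullable=False)

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Book(title='Untitled', summary='-',
                     publication_date=datetime.date(2020, 1, 1)))
    try:
        session.commit()
    except IntegrityError:
        session.rollback()
        print('rejected: author_id is required')
```

With the column nullable, the same insert would silently succeed, so the invariant would only hold if every code path remembered to set `author_id`.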
```python
db.close()


@app.get("/authors/", response_model=List[schemas.Author])
```
Consider adding pagination parameters (e.g., `skip`, `limit`) to the `read_authors` endpoint to handle large datasets efficiently, as suggested in the task requirements.
```python
return crud.create_author(db=db, author=author)


@app.get("/books/", response_model=List[schemas.Book])
```
Consider adding pagination parameters (e.g., `skip`, `limit`) to the `read_books` endpoint to handle large datasets efficiently, as suggested in the task requirements.
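Both pagination comments point at the same pattern: accept `skip`/`limit` and push them down into the query via `offset()`/`limit()`. A minimal self-contained sketch (the `Book` model and `get_books` helper are simplified stand-ins for the project's actual `models`/`crud` modules, assuming SQLAlchemy):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Book(Base):
    __tablename__ = 'book'
    id = Column(Integer, primary_key=True)
    title = Column(String(255), nullable=False)

def get_books(db: Session, skip: int = 0, limit: int = 100):
    # An explicit ORDER BY keeps pages stable; OFFSET/LIMIT bound the result
    return db.query(Book).order_by(Book.id).offset(skip).limit(limit).all()

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)

with Session(engine) as db:
    db.add_all([Book(title=f'book-{i}') for i in range(10)])
    db.commit()
    print([b.title for b in get_books(db, skip=2, limit=3)])
    # ['book-2', 'book-3', 'book-4']
```

In the FastAPI endpoint itself, `skip` and `limit` then simply become query parameters with defaults, e.g. `def read_books(skip: int = 0, limit: int = 100, db: Session = Depends(get_db))`, forwarded to the crud helper.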