
App/Language: Add a LLM example about text generation #114

Merged (1 commit, Oct 4, 2024)

Conversation

niley7464 (Contributor):

This patch adds a draft of the LLM example.
It uses the llama2 model to generate text from an input prompt.

@niley7464 niley7464 marked this pull request as ready for review October 4, 2024 04:36
@niley7464 niley7464 changed the title [After #113] App/Language: Add a LLM example about text generation App/Language: Add a LLM example about text generation Oct 4, 2024
}?.run {
Log.e(TAG, "Not supported LLM")
}
Member:

When is this block invoked?

niley7464 (Contributor, author):

I just want to handle the case where no LLM-related service exists.

niley7464 (Contributor, author), Oct 4, 2024:

https://tourspace.tistory.com/208

I was using 'let' incorrectly. From the article above (translated from Korean):

1. When not to use LET
1-1. Don't use `let` just to null-check an immutable variable.

Updated not to use let.
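The article's point can be illustrated with a small, self-contained Kotlin sketch. The `findLlmService` helper and the service names below are hypothetical, for illustration only; they are not part of the patch. For an immutable value, a plain null check is clearer than a `?.let { } ?: run { }` chain, which can also misfire when the `let` body itself returns null.

```kotlin
// Hypothetical helper, for illustration only: returns the first
// LLM-related service name, or null when none is registered.
fun findLlmService(services: List<String>): String? =
    services.firstOrNull { it.contains("llm") }

fun main() {
    val service = findLlmService(listOf("vision", "asr"))

    // Discouraged for immutable values:
    //   service?.let { use(it) } ?: run { Log.e(TAG, "Not supported LLM") }
    // A plain null check reads better and has no surprising branches:
    if (service == null) {
        println("Not supported LLM")
    } else {
        println("Using service: $service")
    }
}
```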

fun runLlama2(input: String, hostAddress: String, servicePort: Int, newDataCb: NewDataCb) {
val port = findPort()
val desc =
"appsrc name=srcx ! application/octet-stream ! tensor_converter ! other/tensors,format=flexible ! tensor_query_client host=${hostAddress} port=${port} dest-host=${hostAddress} dest-port=${servicePort} timeout=1000000 ! tensor_sink name=sinkx"
Member:

Suggested change
"appsrc name=srcx ! application/octet-stream ! tensor_converter ! other/tensors,format=flexible ! tensor_query_client host=${hostAddress} port=${port} dest-host=${hostAddress} dest-port=${servicePort} timeout=1000000 ! tensor_sink name=sinkx"
"appsrc name=srcx ! application/octet-stream ! tensor_converter ! other/tensors,format=flexible ! tensor_query_client host=$hostAddress port=$port dest-host=$hostAddress dest-port=$servicePort timeout=1000000 ! tensor_sink name=sinkx"

niley7464 (Contributor, author):

Updated
Thank you 😅
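The suggestion above relies on a general Kotlin rule: in string templates, braces are only required when the expression is more than a simple variable name, so `$hostAddress` and `${hostAddress}` produce identical strings. A minimal sketch (the values here are placeholders, not the example's real defaults):

```kotlin
fun main() {
    val hostAddress = "localhost"  // placeholder value
    val servicePort = 5001         // placeholder value

    // Both forms render the same text; braces are only needed for
    // compound expressions such as "${pipeline.length}".
    val withBraces = "host=${hostAddress} port=${servicePort}"
    val withoutBraces = "host=$hostAddress port=$servicePort"

    check(withBraces == withoutBraces)
    println(withoutBraces)  // host=localhost port=5001
}
```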

This patch adds a draft of the LLM example.
It uses llama2 model to generate text using input prompt.

Signed-off-by: Yelin Jeong <[email protected]>
@wooksong wooksong merged commit 6865e96 into nnstreamer:main Oct 4, 2024
2 checks passed
@niley7464 niley7464 deleted the llama-example branch October 4, 2024 05:30