
bug: Image recognition #3087

Closed · 1 of 4 tasks
kalle07 opened this issue Jun 22, 2024 · 6 comments
Labels: type: bug (Something isn't working)

@kalle07 commented Jun 22, 2024

  • I have searched the existing issues

Current behavior

error log below

By the way, the same model and the same mmproj file work with koboldcpp, so maybe you can copy-paste from there ;)

Minimum reproduction steps

1. Choose a model: the hosted LLaVA 7B
2. Attach a 512x512 JPG (an equivalent API request is sketched below)
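The same request can also be reproduced against Jan's local API server, which takes the chat UI out of the loop. The snippet below is only a hedged sketch: the port (1337), the `/v1/chat/completions` route, and the OpenAI-style vision payload are assumptions about how Jan 0.5.1 exposes the loaded llava-7b model, not details confirmed in this report (the logs further down do show the engine receiving a base64-encoded image).

```python
# Hedged repro sketch (assumed endpoint and payload shape; adjust to your setup).
import base64
import requests

# Encode any 512x512 JPG as base64, the form in which the engine later sees the image.
with open("test-512x512.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "model": "llava-7b",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "hello, what is in this image?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }
    ],
    "stream": False,
}

# Assumed local server address; Jan's Local API Server has to be enabled first.
resp = requests.post("http://127.0.0.1:1337/v1/chat/completions", json=payload, timeout=120)
print(resp.status_code)
print(resp.text)
```

If the request hangs or the connection drops here, the failure is in the engine rather than in the chat UI.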

Expected behavior

...

Screenshots / Logs

2024-06-22T11:34:32.434Z [CORTEX]::Debug: Request to kill cortex
2024-06-22T11:34:32.440Z [CORTEX]::Debug: cortex process is terminated
2024-06-22T11:39:43.866Z [SPECS]::Version: 0.5.1
2024-06-22T11:39:43.867Z [SPECS]::Machine: x86_64
2024-06-22T11:39:43.867Z [SPECS]::Endianness: LE
2024-06-22T11:39:43.866Z [SPECS]::CPUs: [{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":3867328,"nice":0,"sys":3539187,"idle":9017531,"irq":1018109}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":4838406,"nice":0,"sys":1642953,"idle":9942484,"irq":34453}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5519609,"nice":0,"sys":2000546,"idle":8903687,"irq":27984}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":4872796,"nice":0,"sys":1642296,"idle":9908750,"irq":26093}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5347093,"nice":0,"sys":1420718,"idle":9656031,"irq":33109}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":4810140,"nice":0,"sys":1254828,"idle":10358875,"irq":34515}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5317484,"nice":0,"sys":1446343,"idle":9660015,"irq":33125}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":4916453,"nice":0,"sys":1289843,"idle":10217531,"irq":34453}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5031203,"nice":0,"sys":1353562,"idle":10039062,"irq":27750}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":4791078,"nice":0,"sys":1192718,"idle":10440031,"irq":30843}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5097828,"nice":0,"sys":1237109,"idle":10088890,"irq":29093}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5281687,"nice":0,"sys":1214156,"idle":9927984,"irq":23765}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5203218,"nice":0,"sys":1525718,"idle":9694890,"irq":18500}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5202234,"nice":0,"sys":1436453,"idle":9785140,"irq":20562}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5402796,"nice":0,"sys":1446109,"idle":9574921,"irq":19265}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5387609,"nice":0,"sys":1350750,"idle":9685453,"irq":17296}}]
2024-06-22T11:39:43.867Z [SPECS]::Parallelism: 16
2024-06-22T11:39:43.867Z [SPECS]::Free Mem: 54787137536
2024-06-22T11:39:43.867Z [SPECS]::Total Mem: 68598566912
2024-06-22T11:39:43.867Z [SPECS]::OS Version: Windows 10 Pro
2024-06-22T11:39:43.867Z [SPECS]::OS Release: 10.0.19045
2024-06-22T11:39:43.869Z [APP]::{"notify":true,"run_mode":"gpu","nvidia_driver":{"exist":true,"version":"555.99"},"cuda":{"exist":true,"version":"12"},"gpus":[{"id":"0","vram":"16380","name":"NVIDIA GeForce RTX 4060 Ti","arch":"ada"}],"gpu_highest_vram":"0","gpus_in_use":["0"],"is_initial":false,"vulkan":false}
2024-06-22T11:39:43.867Z [SPECS]::OS Platform: win32
2024-06-22T11:39:43.867Z [SPECS]::0, 16380, NVIDIA GeForce RTX 4060 Ti

2024-06-22T11:40:40.935Z [CORTEX]::CPU information - 9
2024-06-22T11:40:40.935Z [CORTEX]::Debug: Request to kill cortex
2024-06-22T11:40:40.954Z [CORTEX]::Debug: cortex process is terminated
2024-06-22T11:40:40.955Z [CORTEX]::Debug: Spawn cortex at path: C:\Users\kallemst\jan\extensions@janhq\inference-cortex-extension\dist\bin\win-cuda-12-0\cortex-cpp.exe, and args: 1,127.0.0.1,3928
2024-06-22T11:40:40.955Z [APP]::C:\Users\kallemst\jan\extensions@janhq\inference-cortex-extension\dist\bin\win-cuda-12-0
2024-06-22T11:40:40.955Z [CORTEX]::Debug: Spawning cortex subprocess...
2024-06-22T11:40:41.075Z [CORTEX]::Debug: cortex is ready
2024-06-22T11:40:41.076Z [CORTEX]::Debug: Loading model with params {"cpu_threads":9,"vision_model":true,"text_model":false,"ctx_len":2048,"prompt_template":"\n### Instruction:\n{prompt}\n### Response:\n","llama_model_path":"C:\Users\kallemst\jan\models\llava-7b\llava-v1.6-mistral-7b.Q4_K_M.gguf","mmproj":"C:\Users\kallemst\jan\models\llava-7b\mmproj-model-f16.gguf","user_prompt":"\n### Instruction:\n","ai_prompt":"\n### Response:\n","model":"llava-7b","ngl":100}
2024-06-22T11:40:41.144Z [CORTEX]::Debug: 20240622 11:40:40.986000 UTC 3448 INFO cortex-cpp version: default_version - main.cc:73
20240622 11:40:40.986000 UTC 3448 INFO cortex.llamacpp version: 0.1.17 - main.cc:78
20240622 11:40:40.986000 UTC 3448 INFO Server started, listening at: 127.0.0.1:3928 - main.cc:81
20240622 11:40:40.986000 UTC 3448 INFO Please load your model - main.cc:82
20240622 11:40:40.986000 UTC 3448 INFO Number of thread is:16 - main.cc:89
20240622 11:40:41.083000 UTC 13412 INFO CPU instruction set: fpu = 1| mmx = 1| sse = 1| sse2 = 1| sse3 = 1| ssse3 = 1| sse4_1 = 1| sse4_2 = 1| pclmulqdq = 1| avx = 1| avx2 = 1| avx512_f = 0| avx512_dq = 0| avx512_ifma = 0| avx512_pf = 0| avx512_er = 0| avx512_cd = 0| avx512_bw = 0| has_avx512_vl = 0| has_avx512_vbmi = 0| has_avx512_vbmi2 = 0| avx512_vnni = 0| avx512_bitalg = 0| avx512_vpopcntdq = 0| avx512_4vnniw = 0| avx512_4fmaps = 0| avx512_vp2intersect = 0| aes = 1| f16c = 1| - server.cc:272
20240622 11:40:41.150000 UTC 13412 INFO Loaded engine: cortex.llamacpp - server.cc:299
20240622 11:40:41.150000 UTC 13412 INFO MMPROJ FILE detected, multi-model enabled! - llama_engine.cc:287
20240622 11:40:41.150000 UTC 13412 DEBUG [LoadModelImpl] cache_type: f16 - llama_engine.cc:347
20240622 11:40:41.150000 UTC 13412 DEBUG [LoadModelImpl] Enabled Flash Attention - llama_engine.cc:356
20240622 11:40:41.150000 UTC 13412 DEBUG [LoadModelImpl] stop: null
 - llama_engine.cc:377
{"timestamp":1719056441,"level":"INFO","function":"LoadModelImpl","line":400,"message":"system info","n_threads":9,"total_threads":16,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | "}

2024-06-22T11:40:41.540Z [CORTEX]::Error: llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from C:\Users\kallemst\jan\models\llava-7b\llava-v1.6-mistral-7b.Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = 1.6
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 15
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama

2024-06-22T11:40:41.547Z [CORTEX]::Error: llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...

2024-06-22T11:40:41.563Z [CORTEX]::Error: llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...

2024-06-22T11:40:41.565Z [CORTEX]::Error: llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors

2024-06-22T11:40:41.580Z [CORTEX]::Error: llm_load_vocab: special tokens cache size = 259

2024-06-22T11:40:41.584Z [CORTEX]::Error: llm_load_vocab: token to piece cache size = 0.1637 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 4.07 GiB (4.83 BPW)
llm_load_print_meta: general.name = 1.6
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'

2024-06-22T11:40:41.597Z [CORTEX]::Error: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes

2024-06-22T11:40:41.674Z [CORTEX]::Error: llm_load_tensors: ggml ctx size = 0.30 MiB

2024-06-22T11:40:42.013Z [CORTEX]::Error: llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 70.31 MiB
llm_load_tensors: CUDA0 buffer size = 4095.05 MiB
.
2024-06-22T11:40:42.118Z [CORTEX]::Error: .
2024-06-22T11:40:42.133Z [CORTEX]::Error: .
2024-06-22T11:40:42.144Z [CORTEX]::Error: .
2024-06-22T11:40:42.152Z [CORTEX]::Error: .
2024-06-22T11:40:42.167Z [CORTEX]::Error: .
2024-06-22T11:40:42.179Z [CORTEX]::Error: .
2024-06-22T11:40:42.187Z [CORTEX]::Error: .
2024-06-22T11:40:42.193Z [CORTEX]::Error: .
2024-06-22T11:40:42.213Z [CORTEX]::Error: .
2024-06-22T11:40:42.221Z [CORTEX]::Error: .
2024-06-22T11:40:42.225Z [CORTEX]::Error: .
2024-06-22T11:40:42.247Z [CORTEX]::Error: .
2024-06-22T11:40:42.257Z [CORTEX]::Error: .
2024-06-22T11:40:42.269Z [CORTEX]::Error: .
2024-06-22T11:40:42.285Z [CORTEX]::Error: .
2024-06-22T11:40:42.289Z [CORTEX]::Error: .
2024-06-22T11:40:42.308Z [CORTEX]::Error: .
2024-06-22T11:40:42.315Z [CORTEX]::Error: .
2024-06-22T11:40:42.322Z [CORTEX]::Error: .
2024-06-22T11:40:42.341Z [CORTEX]::Error: .
2024-06-22T11:40:42.349Z [CORTEX]::Error: .
2024-06-22T11:40:42.356Z [CORTEX]::Error: .
2024-06-22T11:40:42.372Z [CORTEX]::Error: .
2024-06-22T11:40:42.380Z [CORTEX]::Error: .
2024-06-22T11:40:42.386Z [CORTEX]::Error: .
2024-06-22T11:40:42.402Z [CORTEX]::Error: .
2024-06-22T11:40:42.410Z [CORTEX]::Error: .
2024-06-22T11:40:42.424Z [CORTEX]::Error: .
2024-06-22T11:40:42.436Z [CORTEX]::Error: .
2024-06-22T11:40:42.444Z [CORTEX]::Error: .
2024-06-22T11:40:42.450Z [CORTEX]::Error: .
2024-06-22T11:40:42.466Z [CORTEX]::Error: .
2024-06-22T11:40:42.474Z [CORTEX]::Error: .
2024-06-22T11:40:42.489Z [CORTEX]::Error: .
2024-06-22T11:40:42.497Z [CORTEX]::Error: .
2024-06-22T11:40:42.505Z [CORTEX]::Error: .
2024-06-22T11:40:42.519Z [CORTEX]::Error: .
2024-06-22T11:40:42.531Z [CORTEX]::Error: .
2024-06-22T11:40:42.539Z [CORTEX]::Error: .
2024-06-22T11:40:42.554Z [CORTEX]::Error: .
2024-06-22T11:40:42.562Z [CORTEX]::Error: .
2024-06-22T11:40:42.570Z [CORTEX]::Error: .
2024-06-22T11:40:42.585Z [CORTEX]::Error: .
2024-06-22T11:40:42.593Z [CORTEX]::Error: .
2024-06-22T11:40:42.604Z [CORTEX]::Error: .
2024-06-22T11:40:42.616Z [CORTEX]::Error: .
2024-06-22T11:40:42.628Z [CORTEX]::Error: .
2024-06-22T11:40:42.637Z [CORTEX]::Error: .
2024-06-22T11:40:42.652Z [CORTEX]::Error: .
2024-06-22T11:40:42.660Z [CORTEX]::Error: .
2024-06-22T11:40:42.670Z [CORTEX]::Error: .
2024-06-22T11:40:42.682Z [CORTEX]::Error: .
2024-06-22T11:40:42.698Z [CORTEX]::Error: .
2024-06-22T11:40:42.702Z [CORTEX]::Error: .
2024-06-22T11:40:42.713Z [CORTEX]::Error: .
2024-06-22T11:40:42.724Z [CORTEX]::Error: .
2024-06-22T11:40:42.735Z [CORTEX]::Error: .
2024-06-22T11:40:42.746Z [CORTEX]::Error: .
2024-06-22T11:40:42.763Z [CORTEX]::Error: .
2024-06-22T11:40:42.767Z [CORTEX]::Error: .
2024-06-22T11:40:42.785Z [CORTEX]::Error: .
2024-06-22T11:40:42.793Z [CORTEX]::Error: .
2024-06-22T11:40:42.800Z [CORTEX]::Error: .
2024-06-22T11:40:42.820Z [CORTEX]::Error: .
2024-06-22T11:40:42.828Z [CORTEX]::Error: .
2024-06-22T11:40:42.831Z [CORTEX]::Error: .
2024-06-22T11:40:42.850Z [CORTEX]::Error: .
2024-06-22T11:40:42.858Z [CORTEX]::Error: .
2024-06-22T11:40:42.864Z [CORTEX]::Error: .
2024-06-22T11:40:42.880Z [CORTEX]::Error: .
2024-06-22T11:40:42.888Z [CORTEX]::Error: .
2024-06-22T11:40:42.903Z [CORTEX]::Error: .
2024-06-22T11:40:42.914Z [CORTEX]::Error: .
2024-06-22T11:40:42.922Z [CORTEX]::Error: .
2024-06-22T11:40:42.928Z [CORTEX]::Error: .
2024-06-22T11:40:42.944Z [CORTEX]::Error: .
2024-06-22T11:40:42.952Z [CORTEX]::Error: .
2024-06-22T11:40:42.966Z [CORTEX]::Error: .
2024-06-22T11:40:42.974Z [CORTEX]::Error: .
2024-06-22T11:40:42.981Z [CORTEX]::Error: .
2024-06-22T11:40:42.996Z [CORTEX]::Error: .
2024-06-22T11:40:43.007Z [CORTEX]::Error: .
2024-06-22T11:40:43.014Z [CORTEX]::Error: .
2024-06-22T11:40:43.028Z [CORTEX]::Error: .
2024-06-22T11:40:43.040Z [CORTEX]::Error: .
2024-06-22T11:40:43.047Z [CORTEX]::Error: .
2024-06-22T11:40:43.053Z [CORTEX]::Error: .
2024-06-22T11:40:43.072Z [CORTEX]::Error: .
2024-06-22T11:40:43.080Z [CORTEX]::Error: .
2024-06-22T11:40:43.086Z [CORTEX]::Error: .
2024-06-22T11:40:43.105Z [CORTEX]::Error: .
2024-06-22T11:40:43.115Z [CORTEX]::Error: .
2024-06-22T11:40:43.127Z [CORTEX]::Error: .
2024-06-22T11:40:43.138Z [CORTEX]::Error: .
2024-06-22T11:40:43.145Z [CORTEX]::Error: .

2024-06-22T11:40:43.146Z [CORTEX]::Error: llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 2048
llama_new_context_with_model: flash_attn = 1
llama_new_context_with_model: freq_base = 1000000.0

2024-06-22T11:40:43.146Z [CORTEX]::Error: llama_new_context_with_model: freq_scale = 1

2024-06-22T11:40:43.153Z [CORTEX]::Error: llama_kv_cache_init: CUDA0 KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB

2024-06-22T11:40:43.154Z [CORTEX]::Error: llama_new_context_with_model: CUDA_Host output buffer size = 0.14 MiB

2024-06-22T11:40:43.165Z [CORTEX]::Error: llama_new_context_with_model: CUDA0 compute buffer size = 344.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 48.02 MiB
llama_new_context_with_model: graph nodes = 903
llama_new_context_with_model: graph splits = 2

2024-06-22T11:40:43.324Z [CORTEX]::Debug: Load model success with response {}
2024-06-22T11:40:43.327Z [CORTEX]::Debug: Validate model state with response 200
2024-06-22T11:40:43.328Z [CORTEX]::Debug: Validate model state success with response {"model_data":"{"frequency_penalty":0.0,"grammar":"","ignore_eos":false,"logit_bias":[],"min_p":0.05000000074505806,"mirostat":0,"mirostat_eta":0.10000000149011612,"mirostat_tau":5.0,"model":"C:\\Users\\kallemst\\jan\\models\\llava-7b\\llava-v1.6-mistral-7b.Q4_K_M.gguf","n_ctx":2048,"n_keep":0,"n_predict":2,"n_probs":0,"penalize_nl":false,"penalty_prompt_tokens":[],"presence_penalty":0.0,"repeat_last_n":64,"repeat_penalty":1.0,"seed":4294967295,"stop":[],"stream":false,"temperature":0.800000011920929,"tfs_z":1.0,"top_k":40,"top_p":0.949999988079071,"typical_p":1.0,"use_penalty_prompt_tokens":false}","model_loaded":true}
2024-06-22T11:40:43.352Z [CORTEX]::Debug: 20240622 11:40:41.152000 UTC 13412 DEBUG [LoadModel] Multi Modal Mode Enabled - llama_server_context.cc:152
20240622 11:40:43.222000 UTC 13412 DEBUG [Initialize] Available slots: - llama_server_context.cc:208
20240622 11:40:43.222000 UTC 13412 DEBUG [Initialize] -> Slot 0 - max context: 2048 - llama_server_context.cc:216
20240622 11:40:43.222000 UTC 13412 INFO Started background task here! - llama_server_context.cc:235
20240622 11:40:43.222000 UTC 13412 INFO Warm-up model: llava-7b - llama_engine.cc:794
20240622 11:40:43.222000 UTC 3736 DEBUG [LaunchSlotWithData] slot 0 is processing [task id: 0] - llama_server_context.cc:602
20240622 11:40:43.222000 UTC 3736 INFO kv cache rm [p0, end) - id_slot: 0, task_id: 0, p0: 0 - llama_server_context.cc:1522
20240622 11:40:43.323000 UTC 3736 DEBUG [PrintTimings] PrintTimings: prompt eval time = 52.321ms / 2 tokens (26.1605 ms per token, 38.2255690832 tokens per second) - llama_client_slot.cc:79
20240622 11:40:43.323000 UTC 3736 DEBUG [PrintTimings] PrintTimings: eval time = 53.642 ms / 4 runs (13.4105 ms per token, 74.5684351814 tokens per second) - llama_client_slot.cc:86
20240622 11:40:43.323000 UTC 3736 DEBUG [PrintTimings] PrintTimings: total time = 105.963 ms - llama_client_slot.cc:92
20240622 11:40:43.323000 UTC 3736 INFO slot released: id_slot: 0, id_task: 0, n_ctx: 2048, n_past: 6, n_system_tokens: 0, n_cache_tokens: 0, truncated: 0 - llama_server_context.cc:1282
20240622 11:40:43.323000 UTC 3736 DEBUG [UpdateSlots] all slots are idle and system prompt is empty, clear the KV cache - llama_server_context.cc:1228
20240622 11:40:43.323000 UTC 3736 DEBUG [KvCacheClear] Clear the entire KV cache - llama_server_context.cc:241
20240622 11:40:43.323000 UTC 13412 INFO {"content":"! This is my first","generation_settings":{"frequency_penalty":0.0,"grammar":"","ignore_eos":false,"logit_bias":[],"min_p":0.05000000074505806,"mirostat":0,"mirostat_eta":0.10000000149011612,"mirostat_tau":5.0,"model":"C:\Users\kallemst\jan\models\llava-7b\llava-v1.6-mistral-7b.Q4_K_M.gguf","n_ctx":2048,"n_keep":0,"n_predict":2,"n_probs":0,"penalize_nl":false,"penalty_prompt_tokens":[],"presence_penalty":0.0,"repeat_last_n":64,"repeat_penalty":1.0,"seed":4294967295,"stop":[],"stream":false,"temperature":0.800000011920929,"tfs_z":1.0,"top_k":40,"top_p":0.949999988079071,"typical_p":1.0,"use_penalty_prompt_tokens":false},"model":"C:\Users\kallemst\jan\models\llava-7b\llava-v1.6-mistral-7b.Q4_K_M.gguf","prompt":"Hello","slot_id":0,"stop":true,"stopped_eos":false,"stopped_limit":true,"stopped_word":false,"stopping_word":"","timings":{"predicted_ms":53.642,"predicted_n":4,"predicted_per_second":74.56843518138771,"predicted_per_token_ms":13.4105,"prompt_ms":52.321,"prompt_n":2,"prompt_per_second":38.225569083159726,"prompt_per_token_ms":26.1605},"tokens_cached":6,"tokens_evaluated":2,"tokens_predicted":4,"truncated":false} - llama_engine.cc:802
20240622 11:40:43.323000 UTC 13412 INFO Model loaded successfully: llava-7b - llama_engine.cc:203
20240622 11:40:43.331000 UTC 3512 INFO Model status responded - llama_engine.cc:246
20240622 11:40:43.343000 UTC 8412 INFO Request 1, model llava-7b: Generating reponse for inference request - llama_engine.cc:451
20240622 11:40:43.343000 UTC 8412 INFO Request 1: Stop words:null
 - llama_engine.cc:468
20240622 11:40:43.343000 UTC 8412 INFO Request 1: Base64 image detected - llama_engine.cc:531
20240622 11:40:43.355000 UTC 8412 INFO Request 1:

Jan version

0.5.1

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Environment details

No response

kalle07 added the type: bug (Something isn't working) label on Jun 22, 2024
@Van-QA (Contributor) commented Jun 26, 2024

Hi @kalle07, can you elaborate on the issue you are facing, perhaps with a screenshot from the Jan app?

Van-QA added the needs info (Not enough info, more logs/data required) label on Jun 26, 2024
@kalle07 (Author) commented Jun 26, 2024

What do you want to see?
A screenshot of the picture I put in the chat, with the word "hello" I added and the working indicator "generating response"?
After about 10 seconds it only shows:
"Jan’s in beta. Access troubleshooting assistance now."

@dlsniper commented Jul 2, 2024

I'm facing the same issue, here are some details:

Using this image:
(attached image: f22-512)

App screenshot:
(attached screenshot: Screenshot_20240702_131110)

I'm on Pop!_OS 22.04 with the latest drivers.


nvidia-smi
Tue Jul  2 13:13:05 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.67                 Driver Version: 550.67         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4080        Off |   00000000:01:00.0  On |                  N/A |
|  0%   43C    P8             23W /  320W |    1175MiB /  16376MiB |     25%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      7174      G   /usr/lib/xorg/Xorg                            466MiB |
|    0   N/A  N/A      7351      G   /usr/bin/kwin_x11                              79MiB |
|    0   N/A  N/A      7516      G   /usr/bin/plasmashell                           63MiB |
|    0   N/A  N/A      7727      G   /usr/bin/konqueror                              4MiB |
|    0   N/A  N/A     14579      G   /usr/bin/firefox                              370MiB |
|    0   N/A  N/A     14988      G   ...seed-version=20240630-180241.146000         84MiB |
|    0   N/A  N/A     39751      G   ...erProcess --variations-seed-version         14MiB |
|    0   N/A  N/A    161630      G   ...erProcess --variations-seed-version         35MiB |
|    0   N/A  N/A    165626      G   ..._64-linux-gnu/libexec/kf5/kioslave5          3MiB |
|    0   N/A  N/A    171111      G   ..._64-linux-gnu/libexec/kf5/kioslave5          3MiB |
+-----------------------------------------------------------------------------------------+


Validating app logs. Next attempt in  120000
2024-07-02T10:11:01.908Z [CORTEX]::CPU information - 12
2024-07-02T10:11:01.908Z [CORTEX]::Debug: Request to kill cortex
2024-07-02T10:11:01.909Z [CORTEX]::Debug: cortex process is terminated
2024-07-02T10:11:01.909Z [CORTEX]::Debug: Spawning cortex subprocess...
2024-07-02T10:11:01.909Z [CORTEX]::Debug: Spawn cortex at path: /home/florin/jan/extensions/@janhq/inference-cortex-extension/dist/bin/linux-cuda-12-0/cortex-cpp, and args: 1,127.0.0.1,3928
2024-07-02T10:11:01.909Z [APP]::/home/florin/jan/extensions/@janhq/inference-cortex-extension/dist/bin/linux-cuda-12-0
2024-07-02T10:11:02.017Z [CORTEX]::Debug: cortex is ready
2024-07-02T10:11:02.017Z [CORTEX]::Debug: Loading model with params {"cpu_threads":12,"vision_model":true,"text_model":false,"ctx_len":2048,"prompt_template":"\n### Instruction:\n{prompt}\n### Response:\n","llama_model_path":"/home/florin/jan/models/llava-13b/llava-v1.6-vicuna-13b.Q4_K_M.gguf","mmproj":"/home/florin/jan/models/llava-13b/mmproj-model-f16.gguf","user_prompt":"\n### Instruction:\n","ai_prompt":"\n### Response:\n","model":"llava-13b","ngl":100}
2024-07-02T10:11:02.047Z [CORTEX]::Debug: 20240702 10:11:01.917795 UTC 170947 INFO  cortex-cpp version: default_version - main.cc:73
20240702 10:11:01.917826 UTC 170947 INFO  cortex.llamacpp version: 0.1.17 - main.cc:78
20240702 10:11:01.917826 UTC 170947 INFO  Server started, listening at: 127.0.0.1:3928 - main.cc:81
20240702 10:11:01.917827 UTC 170947 INFO  Please load your model - main.cc:82
20240702 10:11:01.917831 UTC 170947 INFO  Number of thread is:24 - main.cc:89
20240702 10:11:02.017921 UTC 170959 INFO  CPU instruction set: fpu = 1| mmx = 1| sse = 1| sse2 = 1| sse3 = 1| ssse3 = 1| sse4_1 = 1| sse4_2 = 1| pclmulqdq = 1| avx = 1| avx2 = 1| avx512_f = 1| avx512_dq = 1| avx512_ifma = 1| avx512_pf = 0| avx512_er = 0| avx512_cd = 1| avx512_bw = 1| has_avx512_vl = 1| has_avx512_vbmi = 1| has_avx512_vbmi2 = 1| avx512_vnni = 1| avx512_bitalg = 1| avx512_vpopcntdq = 1| avx512_4vnniw = 0| avx512_4fmaps = 0| avx512_vp2intersect = 0| aes = 1| f16c = 1| - server.cc:272
20240702 10:11:02.046323 UTC 170959 INFO  Loaded engine: cortex.llamacpp - server.cc:299
20240702 10:11:02.046532 UTC 170959 INFO  MMPROJ FILE detected, multi-model enabled! - llama_engine.cc:287
20240702 10:11:02.046561 UTC 170959 DEBUG [LoadModelImpl] cache_type: f16 - llama_engine.cc:347
20240702 10:11:02.046562 UTC 170959 DEBUG [LoadModelImpl] Enabled Flash Attention - llama_engine.cc:356
20240702 10:11:02.046699 UTC 170959 DEBUG [LoadModelImpl] stop: null
 - llama_engine.cc:377
{"timestamp":1719915062,"level":"INFO","function":"LoadModelImpl","line":395,"message":"system info","n_threads":12,"total_threads":24,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | "}

2024-07-02T10:11:02.254Z [CORTEX]::Error: llama_model_loader: loaded meta data with 22 key-value pairs and 363 tensors from /home/florin/jan/models/llava-13b/llava-v1.6-vicuna-13b.Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 5120
llama_model_loader: - kv   4:                          llama.block_count u32              = 40
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 13824
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 40
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 40
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 15
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama

2024-07-02T10:11:02.257Z [CORTEX]::Error: llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...

2024-07-02T10:11:02.262Z [CORTEX]::Error: llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...

2024-07-02T10:11:02.262Z [CORTEX]::Error: llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  19:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  20:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   81 tensors
llama_model_loader: - type q4_K:  241 tensors
llama_model_loader: - type q6_K:   41 tensors

2024-07-02T10:11:02.268Z [CORTEX]::Error: llm_load_vocab: special tokens cache size = 259

2024-07-02T10:11:02.270Z [CORTEX]::Error: llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 5120
llm_load_print_meta: n_head           = 40
llm_load_print_meta: n_head_kv        = 40
llm_load_print_meta: n_layer          = 40
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 5120
llm_load_print_meta: n_embd_v_gqa     = 5120
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 13824
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0

2024-07-02T10:11:02.270Z [CORTEX]::Error: llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 13B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 13.02 B
llm_load_print_meta: model size       = 7.33 GiB (4.83 BPW) 
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'

2024-07-02T10:11:02.280Z [CORTEX]::Error: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:

2024-07-02T10:11:02.280Z [CORTEX]::Error:   Device 0: NVIDIA GeForce RTX 4080, compute capability 8.9, VMM: yes

2024-07-02T10:11:02.328Z [CORTEX]::Error: llm_load_tensors: ggml ctx size =    0.37 MiB

2024-07-02T10:11:02.699Z [CORTEX]::Error: llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 41/41 layers to GPU
llm_load_tensors:        CPU buffer size =    87.89 MiB
llm_load_tensors:      CUDA0 buffer size =  7412.96 MiB
.
2024-07-02T10:11:02.709Z [CORTEX]::Error: .
2024-07-02T10:11:02.710Z [CORTEX]::Error: .
2024-07-02T10:11:02.718Z [CORTEX]::Error: .
2024-07-02T10:11:02.725Z [CORTEX]::Error: .
2024-07-02T10:11:02.729Z [CORTEX]::Error: .
2024-07-02T10:11:02.738Z [CORTEX]::Error: .
2024-07-02T10:11:02.741Z [CORTEX]::Error: .
2024-07-02T10:11:02.750Z [CORTEX]::Error: .
2024-07-02T10:11:02.754Z [CORTEX]::Error: .
2024-07-02T10:11:02.760Z [CORTEX]::Error: .
2024-07-02T10:11:02.770Z [CORTEX]::Error: .
2024-07-02T10:11:02.773Z [CORTEX]::Error: .
2024-07-02T10:11:02.778Z [CORTEX]::Error: .
2024-07-02T10:11:02.786Z [CORTEX]::Error: .
2024-07-02T10:11:02.790Z [CORTEX]::Error: .
2024-07-02T10:11:02.797Z [CORTEX]::Error: .
2024-07-02T10:11:02.803Z [CORTEX]::Error: .
2024-07-02T10:11:02.811Z [CORTEX]::Error: .
2024-07-02T10:11:02.817Z [CORTEX]::Error: .
2024-07-02T10:11:02.821Z [CORTEX]::Error: .
2024-07-02T10:11:02.829Z [CORTEX]::Error: .
2024-07-02T10:11:02.834Z [CORTEX]::Error: .
2024-07-02T10:11:02.840Z [CORTEX]::Error: .
2024-07-02T10:11:02.848Z [CORTEX]::Error: .
2024-07-02T10:11:02.853Z [CORTEX]::Error: .
2024-07-02T10:11:02.859Z [CORTEX]::Error: .
2024-07-02T10:11:02.865Z [CORTEX]::Error: .
2024-07-02T10:11:02.874Z [CORTEX]::Error: .
2024-07-02T10:11:02.877Z [CORTEX]::Error: .
2024-07-02T10:11:02.885Z [CORTEX]::Error: .
2024-07-02T10:11:02.891Z [CORTEX]::Error: .
2024-07-02T10:11:02.896Z [CORTEX]::Error: .
2024-07-02T10:11:02.903Z [CORTEX]::Error: .
2024-07-02T10:11:02.908Z [CORTEX]::Error: .
2024-07-02T10:11:02.918Z [CORTEX]::Error: .
2024-07-02T10:11:02.921Z [CORTEX]::Error: .
2024-07-02T10:11:02.929Z [CORTEX]::Error: .
2024-07-02T10:11:02.935Z [CORTEX]::Error: .
2024-07-02T10:11:02.939Z [CORTEX]::Error: .
2024-07-02T10:11:02.946Z [CORTEX]::Error: .
2024-07-02T10:11:02.952Z [CORTEX]::Error: .
2024-07-02T10:11:02.958Z [CORTEX]::Error: .
2024-07-02T10:11:02.965Z [CORTEX]::Error: .
2024-07-02T10:11:02.970Z [CORTEX]::Error: .
2024-07-02T10:11:02.976Z [CORTEX]::Error: .
2024-07-02T10:11:02.983Z [CORTEX]::Error: .
2024-07-02T10:11:02.990Z [CORTEX]::Error: .
2024-07-02T10:11:02.994Z [CORTEX]::Error: .
2024-07-02T10:11:03.002Z [CORTEX]::Error: .
2024-07-02T10:11:03.009Z [CORTEX]::Error: .
2024-07-02T10:11:03.013Z [CORTEX]::Error: .
2024-07-02T10:11:03.020Z [CORTEX]::Error: .
2024-07-02T10:11:03.026Z [CORTEX]::Error: .
2024-07-02T10:11:03.034Z [CORTEX]::Error: .
2024-07-02T10:11:03.039Z [CORTEX]::Error: .
2024-07-02T10:11:03.046Z [CORTEX]::Error: .
2024-07-02T10:11:03.050Z [CORTEX]::Error: .
2024-07-02T10:11:03.057Z [CORTEX]::Error: .
2024-07-02T10:11:03.064Z [CORTEX]::Error: .
2024-07-02T10:11:03.069Z [CORTEX]::Error: .
2024-07-02T10:11:03.075Z [CORTEX]::Error: .
2024-07-02T10:11:03.081Z [CORTEX]::Error: .
2024-07-02T10:11:03.090Z [CORTEX]::Error: .
2024-07-02T10:11:03.094Z [CORTEX]::Error: .
2024-07-02T10:11:03.100Z [CORTEX]::Error: .
2024-07-02T10:11:03.108Z [CORTEX]::Error: .
2024-07-02T10:11:03.113Z [CORTEX]::Error: .
2024-07-02T10:11:03.119Z [CORTEX]::Error: .
2024-07-02T10:11:03.125Z [CORTEX]::Error: .
2024-07-02T10:11:03.131Z [CORTEX]::Error: .
2024-07-02T10:11:03.139Z [CORTEX]::Error: .
2024-07-02T10:11:03.143Z [CORTEX]::Error: .
2024-07-02T10:11:03.150Z [CORTEX]::Error: .
2024-07-02T10:11:03.156Z [CORTEX]::Error: .
2024-07-02T10:11:03.164Z [CORTEX]::Error: .
2024-07-02T10:11:03.170Z [CORTEX]::Error: .
2024-07-02T10:11:03.174Z [CORTEX]::Error: .
2024-07-02T10:11:03.183Z [CORTEX]::Error: .
2024-07-02T10:11:03.188Z [CORTEX]::Error: .
2024-07-02T10:11:03.194Z [CORTEX]::Error: .
2024-07-02T10:11:03.200Z [CORTEX]::Error: .
2024-07-02T10:11:03.208Z [CORTEX]::Error: .
2024-07-02T10:11:03.214Z [CORTEX]::Error: .
2024-07-02T10:11:03.218Z [CORTEX]::Error: .
2024-07-02T10:11:03.227Z [CORTEX]::Error: .
2024-07-02T10:11:03.230Z [CORTEX]::Error: .
2024-07-02T10:11:03.239Z [CORTEX]::Error: .
2024-07-02T10:11:03.243Z [CORTEX]::Error: .
2024-07-02T10:11:03.248Z [CORTEX]::Error: .
2024-07-02T10:11:03.259Z [CORTEX]::Error: .
2024-07-02T10:11:03.262Z [CORTEX]::Error: .
2024-07-02T10:11:03.267Z [CORTEX]::Error: .
2024-07-02T10:11:03.275Z [CORTEX]::Error: .
2024-07-02T10:11:03.279Z [CORTEX]::Error: .
2024-07-02T10:11:03.287Z [CORTEX]::Error: .
2024-07-02T10:11:03.294Z [CORTEX]::Error: .
2024-07-02T10:11:03.298Z [CORTEX]::Error: .
2024-07-02T10:11:03.307Z [CORTEX]::Error: .
2024-07-02T10:11:03.466Z [CORTEX]::Error: .

2024-07-02T10:11:03.466Z [CORTEX]::Error: llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 2048
llama_new_context_with_model: flash_attn = 1
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1

2024-07-02T10:11:03.469Z [CORTEX]::Error: llama_kv_cache_init:      CUDA0 KV buffer size =  1600.00 MiB
llama_new_context_with_model: KV self size  = 1600.00 MiB, K (f16):  800.00 MiB, V (f16):  800.00 MiB

2024-07-02T10:11:03.470Z [CORTEX]::Error: llama_new_context_with_model:  CUDA_Host  output buffer size =     0.14 MiB

2024-07-02T10:11:03.494Z [CORTEX]::Error: llama_new_context_with_model:      CUDA0 compute buffer size =   360.02 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    56.02 MiB
llama_new_context_with_model: graph nodes  = 1127
llama_new_context_with_model: graph splits = 2

2024-07-02T10:11:03.623Z [CORTEX]::Debug: Load model success with response {}
2024-07-02T10:11:03.624Z [CORTEX]::Debug: Validate model state with response 200
2024-07-02T10:11:03.624Z [CORTEX]::Debug: Validate model state success with response {"model_data":"{\"frequency_penalty\":0.0,\"grammar\":\"\",\"ignore_eos\":false,\"logit_bias\":[],\"min_p\":0.05000000074505806,\"mirostat\":0,\"mirostat_eta\":0.10000000149011612,\"mirostat_tau\":5.0,\"model\":\"/home/florin/jan/models/llava-13b/llava-v1.6-vicuna-13b.Q4_K_M.gguf\",\"n_ctx\":2048,\"n_keep\":0,\"n_predict\":2,\"n_probs\":0,\"penalize_nl\":false,\"penalty_prompt_tokens\":[],\"presence_penalty\":0.0,\"repeat_last_n\":64,\"repeat_penalty\":1.0,\"seed\":4294967295,\"stop\":[],\"stream\":false,\"temperature\":0.800000011920929,\"tfs_z\":1.0,\"top_k\":40,\"top_p\":0.949999988079071,\"typical_p\":1.0,\"use_penalty_prompt_tokens\":false}","model_loaded":true}
2024-07-02T10:11:03.790Z [CORTEX]::Debug: cortex exited with code: null
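Since the log shows nothing after the image request except the engine exit, one low-effort way to confirm the correlation is to watch Jan's app log while re-sending an image. A minimal sketch, assuming the log sits at `~/jan/logs/app.log` (the path is an assumption and may differ per install):

```python
# Hedged sketch: follow Jan's app log and flag when the cortex subprocess dies.
import time
from pathlib import Path

LOG = Path.home() / "jan" / "logs" / "app.log"  # assumed location of the log quoted above

def follow(path: Path):
    """Yield lines appended to the file, similar to `tail -f`."""
    with path.open("r", errors="replace") as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line.rstrip("\n")

for line in follow(LOG):
    if "Base64 image detected" in line:
        print("image request reached the engine:", line)
    if "cortex exited with code" in line:
        print("engine process died:", line)
        break
```

If "cortex exited with code" shows up right after the image request every time, the crash is in cortex-cpp's image handling rather than in the Jan UI.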

@imtuyethan (Contributor) commented

What is the latest status? We can't reproduce it, right? @Van-QA

@kalle07 (Author) commented Aug 28, 2024

At least you can try the version from Jun 26 / Jul 2, where we both had that error ;)
Maybe it's fixed by now, but who goes back and checks that?

imtuyethan moved this to Icebox in Jan & Cortex on Aug 28, 2024
imtuyethan removed the needs info (Not enough info, more logs/data required) label on Aug 28, 2024
imtuyethan moved this from Icebox to Planned in Jan & Cortex on Aug 28, 2024
imtuyethan moved this from Planned to Icebox in Jan & Cortex on Aug 28, 2024
imtuyethan changed the title from "bug: [image recognition]" to "bug: Image recognition" on Sep 2, 2024
imtuyethan moved this from Icebox to Planning in Jan & Cortex on Sep 2, 2024
imtuyethan moved this from Planning to Need Investigation in Jan & Cortex on Sep 2, 2024
@freelerobot (Contributor) commented

Duplicate of janhq/models#47

Labels: type: bug (Something isn't working)
Projects: Archived in project
Development: No branches or pull requests
7 participants