segmentation fault 0.2.84 when using function calling #1636

Closed · 4 tasks done
axel7083 opened this issue Jul 29, 2024 · 8 comments
Labels
bug Something isn't working

Comments

axel7083 commented Jul 29, 2024

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior (0.2.82)

INFO:     Started server process [2]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
from_string grammar:
age-kv ::= ["] [a] [g] [e] ["] space [:] space integer 
space ::= space_41 
integer ::= integer_5 space 
char ::= [^"\] | [\] char_4 
char_4 ::= ["\/bfnrt] | [u] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] 
integer_5 ::= integer_6 integral-part 
integer_6 ::= [-] | 
integral-part ::= [0-9] | [1-9] integral-part_37 
integral-part_8 ::= [0-9] integral-part_36 
integral-part_9 ::= [0-9] integral-part_35 
integral-part_10 ::= [0-9] integral-part_34 
integral-part_11 ::= [0-9] integral-part_33 
integral-part_12 ::= [0-9] integral-part_32 
integral-part_13 ::= [0-9] integral-part_31 
integral-part_14 ::= [0-9] integral-part_30 
integral-part_15 ::= [0-9] integral-part_29 
integral-part_16 ::= [0-9] integral-part_28 
integral-part_17 ::= [0-9] integral-part_27 
integral-part_18 ::= [0-9] integral-part_26 
integral-part_19 ::= [0-9] integral-part_25 
integral-part_20 ::= [0-9] integral-part_24 
integral-part_21 ::= [0-9] integral-part_23 
integral-part_22 ::= [0-9] 
integral-part_23 ::= integral-part_22 | 
integral-part_24 ::= integral-part_21 | 
integral-part_25 ::= integral-part_20 | 
integral-part_26 ::= integral-part_19 | 
integral-part_27 ::= integral-part_18 | 
integral-part_28 ::= integral-part_17 | 
integral-part_29 ::= integral-part_16 | 
integral-part_30 ::= integral-part_15 | 
integral-part_31 ::= integral-part_14 | 
integral-part_32 ::= integral-part_13 | 
integral-part_33 ::= integral-part_12 | 
integral-part_34 ::= integral-part_11 | 
integral-part_35 ::= integral-part_10 | 
integral-part_36 ::= integral-part_9 | 
integral-part_37 ::= integral-part_8 | 
name-kv ::= ["] [n] [a] [m] [e] ["] space [:] space string 
string ::= ["] string_42 ["] space 
root ::= [{] space name-kv [,] space age-kv [}] space 
space_41 ::= [ ] | 
string_42 ::= char string_42 | 

age-kv ::= "\"age\"" space ":" space integer
char ::= [^"\\] | "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F])
integer ::= ("-"? integral-part) space
integral-part ::= [0-9] | [1-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9])?)?)?)?)?)?)?)?)?)?)?)?)?)?)?
name-kv ::= "\"name\"" space ":" space string
root ::= "{" space name-kv "," space age-kv "}" space
space ::= " "?
string ::= "\"" char* "\"" space
/opt/app-root/lib64/python3.11/site-packages/llama_cpp/llama.py:1054: RuntimeWarning: Detected duplicate leading "<|begin_of_text|>" in prompt, this will likely reduce response quality, consider removing it...
  warnings.warn(

llama_print_timings:        load time =    3918.59 ms
llama_print_timings:      sample time =     131.32 ms /    14 runs   (    9.38 ms per token,   106.61 tokens per second)
llama_print_timings: prompt eval time =    3918.17 ms /   137 tokens (   28.60 ms per token,    34.97 tokens per second)
llama_print_timings:        eval time =    1502.27 ms /    13 runs   (  115.56 ms per token,     8.65 tokens per second)
llama_print_timings:       total time =    5617.33 ms /   150 tokens

Current Behavior (0.2.84)

transformers==4.43.3

INFO:     Started server process [60]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:     127.0.0.1:53702 - "GET /v1/models HTTP/1.1" 200 OK
from_string grammar:
age-kv ::= ["] [a] [g] [e] ["] space [:] space integer 
space ::= space_41 
integer ::= integer_5 space 
char ::= [^"\] | [\] char_4 
char_4 ::= ["\/bfnrt] | [u] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] 
integer_5 ::= integer_6 integral-part 
integer_6 ::= [-] | 
integral-part ::= [0-9] | [1-9] integral-part_37 
integral-part_8 ::= [0-9] integral-part_36 
integral-part_9 ::= [0-9] integral-part_35 
integral-part_10 ::= [0-9] integral-part_34 
integral-part_11 ::= [0-9] integral-part_33 
integral-part_12 ::= [0-9] integral-part_32 
integral-part_13 ::= [0-9] integral-part_31 
integral-part_14 ::= [0-9] integral-part_30 
integral-part_15 ::= [0-9] integral-part_29 
integral-part_16 ::= [0-9] integral-part_28 
integral-part_17 ::= [0-9] integral-part_27 
integral-part_18 ::= [0-9] integral-part_26 
integral-part_19 ::= [0-9] integral-part_25 
integral-part_20 ::= [0-9] integral-part_24 
integral-part_21 ::= [0-9] integral-part_23 
integral-part_22 ::= [0-9] 
integral-part_23 ::= integral-part_22 | 
integral-part_24 ::= integral-part_21 | 
integral-part_25 ::= integral-part_20 | 
integral-part_26 ::= integral-part_19 | 
integral-part_27 ::= integral-part_18 | 
integral-part_28 ::= integral-part_17 | 
integral-part_29 ::= integral-part_16 | 
integral-part_30 ::= integral-part_15 | 
integral-part_31 ::= integral-part_14 | 
integral-part_32 ::= integral-part_13 | 
integral-part_33 ::= integral-part_12 | 
integral-part_34 ::= integral-part_11 | 
integral-part_35 ::= integral-part_10 | 
integral-part_36 ::= integral-part_9 | 
integral-part_37 ::= integral-part_8 | 
name-kv ::= ["] [n] [a] [m] [e] ["] space [:] space string 
string ::= ["] string_42 ["] space 
root ::= [{] space name-kv [,] space age-kv [}] space 
space_41 ::= [ ] | 
string_42 ::= char string_42 | 

age-kv ::= "\"age\"" space ":" space integer
char ::= [^"\\] | "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F])
integer ::= ("-"? integral-part) space
integral-part ::= [0-9] | [1-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9] ([0-9])?)?)?)?)?)?)?)?)?)?)?)?)?)?)?
name-kv ::= "\"name\"" space ":" space string
root ::= "{" space name-kv "," space age-kv "}" space
space ::= " "?
string ::= "\"" char* "\"" space
/opt/app-root/lib64/python3.11/site-packages/llama_cpp/llama.py:1129: RuntimeWarning: Detected duplicate leading "<|begin_of_text|>" in prompt, this will likely reduce response quality, consider removing it...
  warnings.warn(
Segmentation fault (core dumped)

transformers==4.41.2

/opt/app-root/lib64/python3.11/site-packages/llama_cpp/llama.py:1129: RuntimeWarning: Detected duplicate leading "<|begin_of_text|>" in prompt, this will likely reduce response quality, consider removing it...
  warnings.warn(
terminate called after throwing an instance of 'std::out_of_range'
  what():  vector::_M_range_check: __n (which is 5018) >= this->size() (which is 1)
./run.sh: line 23:     2 Aborted                 (core dumped) python -m llama_cpp.server --model ${MODEL_PATH} --host ${HOST:=0.0.0.0} --port ${PORT:=8001} --n_gpu_layers ${GPU_LAYERS:=0} --clip_model_path ${CLIP_MODEL_PATH:=None} --chat_format ${CHAT_FORMAT:=llama-2} ${PRETRAINED_MODEL_PATH:=} ${HF_PRETRAINED_MODEL:+--hf_pretrained_model_name_or_path ${HF_PRETRAINED_MODEL}} --interrupt_requests ${INTERRUPT_REQUESTS:=False}

Environment and Context

Running in a container with the following dependencies:

llama-cpp-python[server]==0.2.84
transformers==4.43.3
pip==24.0
  • Physical (or virtual) hardware you are using, e.g. for Linux:
Architecture:             x86_64
  CPU op-mode(s):         32-bit, 64-bit
  Address sizes:          39 bits physical, 48 bits virtual
  Byte Order:             Little Endian
CPU(s):                   16
  On-line CPU(s) list:    0-15
Vendor ID:                GenuineIntel
  Model name:             11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz
    CPU family:           6
    Model:                141
    Thread(s) per core:   2
    Core(s) per socket:   8
    Socket(s):            1
    Stepping:             1
    CPU(s) scaling MHz:   21%
    CPU max MHz:          4800.0000
    CPU min MHz:          800.0000
    BogoMIPS:             4992.00
    Flags:                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology 
                          nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand la
                          hf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed 
                          adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req
                           vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear ibt flush_l1d arch_capabilities
Virtualization features:  
  Virtualization:         VT-x
Caches (sum of all):      
  L1d:                    384 KiB (8 instances)
  L1i:                    256 KiB (8 instances)
  L2:                     10 MiB (8 instances)
  L3:                     24 MiB (1 instance)
NUMA:                     
  NUMA node(s):           1
  NUMA node0 CPU(s):      0-15
Vulnerabilities:          
  Gather data sampling:   Mitigation; Microcode
  Itlb multihit:          Not affected
  L1tf:                   Not affected
  Mds:                    Not affected
  Meltdown:               Not affected
  Mmio stale data:        Not affected
  Reg file data sampling: Not affected
  Retbleed:               Not affected
  Spec rstack overflow:   Not affected
  Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
  Srbds:                  Not affected
  Tsx async abort:        Not affected

  • Operating System, e.g. for Linux:
Linux 0647ff0e25b5 6.9.9-200.fc40.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jul 11 19:29:01 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
  • SDK version, e.g. for Linux:
$ python3 --version
python3 --version
$ make --version
make --version
$ g++ --version
g++ --version

Failure Information (for bugs)

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce

Start the container, then install the dependencies and launch the server:

$> podman run -it --entrypoint=/bin/bash -v /home/models:/models:Z  -p 8000:8000 registry.access.redhat.com/ubi9/python-311:1-66.1720018730 
$: # install dependencies
$: python3 -m llama_cpp.server --model /models/functionary-small-v2.5.Q4_0.gguf --chat_format functionary-v2 --hf_pretrained_model_name_or_path meetkai/functionary-small-v2.5-GGUF --host 0.0.0.0 --port 8000
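
For illustration only (not part of the original report): the crash is presumably triggered by the first function-calling request after startup. A minimal client-side request against the server's OpenAI-compatible endpoint might look like the sketch below. The tool name register_person and its name/age schema are hypothetical, inferred from the name-kv and age-kv rules in the grammar dump above, and the model id should match whatever GET /v1/models returns.

# Hypothetical function-calling request against the server started above.
# "register_person" and its name/age schema are assumptions inferred from
# the grammar dump in the report; adjust the model id to the one served.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-no-key-needed")

resp = client.chat.completions.create(
    model="functionary-small-v2.5",  # match the id returned by GET /v1/models
    messages=[{"role": "user", "content": "Register John, who is 30 years old."}],
    tools=[{
        "type": "function",
        "function": {
            "name": "register_person",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                },
                "required": ["name", "age"],
            },
        },
    }],
    tool_choice="auto",
)
print(resp.choices[0].message)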
yamikumo-DSD commented Jul 29, 2024

I got the same problem (a segfault) when using LlamaGrammar. Function calling is implemented on top of LlamaGrammar, so I think our problems are the same.
I rolled back to 0.2.83, which seems stable for now.
The latest version kills python kernel with LlamaGrammar #1623
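
Since function calling is built on the same grammar machinery, a minimal in-process sketch of the LlamaGrammar path (the model path is a placeholder, and the grammar mirrors the one printed in the report) would be:

# Sketch of the LlamaGrammar code path that function calling uses internally;
# per this thread it works on 0.2.82/0.2.83 and segfaults on 0.2.84.
# The model path below is a placeholder.
from llama_cpp import Llama, LlamaGrammar

GRAMMAR = r'''
root ::= "{" space name-kv "," space age-kv "}" space
name-kv ::= "\"name\"" space ":" space string
age-kv ::= "\"age\"" space ":" space integer
string ::= "\"" char* "\"" space
char ::= [^"\\] | "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F])
integer ::= ("-"? [0-9]+) space
space ::= " "?
'''

llm = Llama(model_path="/models/functionary-small-v2.5.Q4_0.gguf", n_ctx=2048)
grammar = LlamaGrammar.from_string(GRAMMAR)
out = llm("Output JSON describing John, age 30: ", grammar=grammar, max_tokens=64)
print(out["choices"][0]["text"])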

axel7083 (Author) commented

transformers==4.41.2

IMO the problem is not related to the transformers library; however, the log is a bit more verbose with an older version of transformers.

transformers    llama-cpp-python[server]    result
4.43.3          0.2.82                      🟢
4.43.3          0.2.84                      🟥
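
Until a fix lands, the only mitigation reported in this thread is pinning to a known-good release, e.g.:

pip install "llama-cpp-python[server]==0.2.82"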

axel7083 (Author) commented

I got the same problem (seg fault) when using LlamaGrammar.

@yamikumo-DSD I reproduced your issue and commented on it. Thanks!

ExtReMLapin (Contributor) commented

Same issue here; I had to roll back.

ncho-sqd commented Jul 31, 2024

Same issue. What would a fix look like for this? Is there a quant for Llama 3.1 that was made without the recent RoPE fix? <=0.2.83 gives me a mismatched-tensor error with Llama 3.1.

handshape commented

Work is underway on this one: #1637

ExtReMLapin (Contributor) commented

Yeah, but to be frank, I'm a little stuck on the PR. It doesn't work yet, and it's not a priority on my end at the office, so help on the PR would be appreciated.

ExtReMLapin (Contributor) commented

Also, that PR introduces the . and {quantity} grammar operators; it doesn't fix the segfaults.
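
For context, those operators in GBNF look roughly like this (an illustrative sketch, not taken from the PR):

# '.' matches any single character; {m,n} bounds repetition
root ::= id ":" .{1,8}
id ::= [a-z]{2,4}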
