Replies: 2 comments 2 replies
-
Initial feedback is that we are using INT for L_ORDERKEY in the set sql(8) "CREATE TABLE ..." build statement, so if the key values exceed the INT range documented at https://dev.mysql.com/doc/refman/8.0/en/integer-types.html then that would result in this error, and the column needs to be changed to BIGINT.
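As a quick sanity check on the numbers: TPC-H order keys are sparse, so the largest L_ORDERKEY is roughly 6,000,000 times the scale factor. At scale factor 1000 that is about 6,000,000,000, well past MySQL's signed INT maximum of 2,147,483,647, which would also explain why only the virtual users loading the lower part of the key range completed. The snippet below is an illustration of the shape of the change, not the verbatim HammerDB build source: the LINEITEM DDL held in the build script's sql() array would move L_ORDERKEY from INT to BIGINT, and the matching O_ORDERKEY column in ORDERS would need the same treatment.

```tcl
# Illustrative sketch only - not the exact HammerDB TPROC-H build source.
# Column list follows the standard TPC-H LINEITEM definition; the key change
# is L_ORDERKEY moving from INT to BIGINT so that key values up to ~6 billion
# (scale factor 1000) stay in range.
set sql(8) "CREATE TABLE LINEITEM (
  L_ORDERKEY      BIGINT NOT NULL,
  L_PARTKEY       INT NOT NULL,
  L_SUPPKEY       INT NOT NULL,
  L_LINENUMBER    INT NOT NULL,
  L_QUANTITY      DECIMAL(15,2) NOT NULL,
  L_EXTENDEDPRICE DECIMAL(15,2) NOT NULL,
  L_DISCOUNT      DECIMAL(15,2) NOT NULL,
  L_TAX           DECIMAL(15,2) NOT NULL,
  L_RETURNFLAG    CHAR(1) NOT NULL,
  L_LINESTATUS    CHAR(1) NOT NULL,
  L_SHIPDATE      DATE NOT NULL,
  L_COMMITDATE    DATE NOT NULL,
  L_RECEIPTDATE   DATE NOT NULL,
  L_SHIPINSTRUCT  CHAR(25) NOT NULL,
  L_SHIPMODE      CHAR(10) NOT NULL,
  L_COMMENT       VARCHAR(44) NOT NULL)"
# O_ORDERKEY in the ORDERS table would need the same INT -> BIGINT change,
# since L_ORDERKEY carries the same key values.
```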
-
I'll transfer this to an issue to track the updates and get the changes committed.
-
I am currently running HammerDB 4.7 on Ubuntu 22.04.2 Linux with kernel 5.15.0-71-generic. I have installed MySQL 8.0 (mysql Ver 8.0.33-0ubuntu0.22.04.1) as well as libmysqlclient21 via the apt package manager. This is a single node configuration.
I ran a build schema workload overnight for a TPROC-H database with a scale factor of 1000 and 64 virtual users. The platform is dual socket with a total of 128 physical CPU cores, so I assume 64 shouldn't be too high a number to execute in parallel. I have also done this in the past, for other databases and for MySQL, on other server platforms with CPU configurations that could support it, without any problems.
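For reference, the build was driven through the HammerDB CLI with a script along the following lines; I am reconstructing it from memory, so treat the exact parameter names and values as approximate rather than a copy of what was actually run.

```tcl
# Approximate hammerdbcli script for the TPROC-H build described above.
# Parameter names are recalled from the HammerDB 4.x MySQL TPROC-H
# dictionary and may not match exactly.
dbset db mysql
dbset bm TPC-H
diset connection mysql_host 127.0.0.1
diset connection mysql_port 3306
diset tpch mysql_scale_fact 1000
diset tpch mysql_num_tpch_threads 64
buildschema
```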
When I came back to it, 42 of the virtual users showed the error "Out of range value for column 'L_ORDERKEY' at row 1". These virtual users seem to be stuck at the "Loading ORDERS and LINEITEM..." step. The other 22 virtual users don't have any problem and appear to have completed their row insertion with "Loading TPCH TABLES COMPLETE".
I then tried a fresh re-install of everything, but came across the same issue a second time. I understand that MySQL supposedly does not support data analytics, as noted in the documentation here: https://www.hammerdb.com/docs/ch12s02.html#d0e4329
I assumed this statement meant that there is no special configuration that would optimize the database for a data analytics focused query set, not that it was simply impossible. I would assume you should still be able to create the tables and run the workload, even if it potentially doesn't perform as well as other databases that do have configuration options to optimize data analytics workloads.
It is not a storage capacity issue, as the database stores its data on a mounted XFS file system on a drive with 3.84 TB of capacity. Running 'df -h' as well as 'sudo nvme list' shows that I am only using 1.2 TB and still have 2.4 TB of space remaining.
I was wondering if I could get some pointers as to where this is going wrong. Thank you very much.