
[pull] main from StarRocks:main #4

Merged
merged 105 commits into from
May 9, 2024

Conversation

@pull pull bot commented Apr 29, 2024

See Commits and Changes for more details.


Created by pull[bot]

Can you help keep this open source service alive? 💖 Please sponsor : )

rickif and others added 16 commits April 29, 2024 11:03
…action_fill_data_cache config (#44946)

Signed-off-by: starrocks-xupeng <[email protected]>
Users should be able to show grants for all predecessor roles that they own,
not just the roles owned directly

Signed-off-by: Dejun Xia <[email protected]>
Signed-off-by: ABingHuang <[email protected]>
Signed-off-by: ABing <[email protected]>
Co-authored-by: Seaven <[email protected]>
@github-actions github-actions bot added the "title needs [type]" and "documentation" (Improvements or additions to documentation) labels Apr 29, 2024
@pull pull bot added the "⤵️ pull" label and removed the "documentation" and "title needs [type]" labels Apr 29, 2024
@github-actions github-actions bot added the "documentation" (Improvements or additions to documentation) label Apr 29, 2024
trueeyu and others added 29 commits May 7, 2024 16:55
## Why I'm doing:
The non-fair ReentrantReadWriteLock in Java may cause lock starvation: in the following code, the read lock will never be held.
```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockStarvationDemo {
    public static void main(String[] args) {
        // Non-fair (default) read-write lock: the writer thread below keeps
        // re-acquiring the write lock, so the reader thread never gets a turn.
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        new Thread(() -> {
            while (true) {
                lock.writeLock().lock();
                try {
                    System.out.println("write");
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    lock.writeLock().unlock();
                }
            }
        }).start();

        new Thread(() -> {
            while (true) {
                lock.readLock().lock();
                try {
                    System.out.println("read");
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    lock.readLock().unlock();
                }
            }
        }).start();
    }
}
```

## What I'm doing:
Change all the locks to fair locks, as sketched below.

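A minimal sketch of the fix described above, assuming only that the locks in question are java.util.concurrent ReentrantLock / ReentrantReadWriteLock instances: passing true to the constructor selects the fair policy, so waiting readers are no longer starved by barging writers.

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairLocks {
    // Fair read-write lock: threads acquire roughly in arrival order,
    // so a steady stream of writers can no longer starve readers.
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock(true);

    // Plain mutual-exclusion lock, also with the fair policy.
    private final ReentrantLock lock = new ReentrantLock(true);
}
```

Fair ordering trades some throughput for a progress guarantee, which is the trade-off this change accepts in order to avoid the starvation shown above.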
Signed-off-by: gengjun-git <[email protected]>
Why I'm doing:
FE will always wait for the task to finish, because report_task_worker does not run when the node acts as a compute node, so a task that times out on the BE is never reported.

What I'm doing:
Enable report_task_worker_pool so that tasks are reported to FE even if they time out on the BE.

Signed-off-by: smartlxh <[email protected]>
Why I'm doing:
In order to reduce reads and writes to the object store, we added COMBINED TXN LOG in #42542, i.e., only one txn log file is written per partition instead of one per tablet. However, in #42542 only stream load and routine load support the combined txn log; ordinary INSERT INTO and broker load cannot use it.

What I'm doing:
Support combined txn log for INSERT and broker load.
Avoid sending invalid txn log deletion requests when combined txn log and batch publish are enabled at the same time.

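A rough illustration of the per-partition grouping described above, not the actual StarRocks implementation (which lives in the BE, not in Java); all names here (TxnLog, publish, writeObject, the object-store key layout) are hypothetical:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CombinedTxnLogSketch {
    // Hypothetical per-tablet txn log record.
    record TxnLog(long partitionId, long tabletId, byte[] payload) {}

    // Group per-tablet logs by partition and write one combined object per partition,
    // so a publish touches the object store once per partition instead of once per tablet.
    static void publish(long txnId, List<TxnLog> logs) {
        Map<Long, List<TxnLog>> byPartition =
                logs.stream().collect(Collectors.groupingBy(TxnLog::partitionId));

        byPartition.forEach((partitionId, partitionLogs) -> {
            String key = String.format("txn_%d/partition_%d/combined.log", txnId, partitionId);
            writeObject(key, partitionLogs);
        });
    }

    // Hypothetical object-store write.
    static void writeObject(String key, List<TxnLog> logs) {
        System.out.printf("PUT %s (%d tablet logs)%n", key, logs.size());
    }
}
```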
Signed-off-by: Alex Zhu <[email protected]>
Signed-off-by: evelyn.zhaojie <[email protected]>
Co-authored-by: evelyn.zhaojie <[email protected]>
…cates into fe for materialized views/task run status (#44981)

Signed-off-by: shuming.li <[email protected]>
@node node merged commit 7410d89 into vivo:main May 9, 2024
6 of 7 checks passed