Merge pull request #84 from cloudwego/release/v0.1.0
chore: release v0.1.0
Hchenn authored Dec 1, 2021
2 parents 3304c59 + 350d1ec commit 68a9c4c
Showing 27 changed files with 646 additions and 267 deletions.
38 changes: 38 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,38 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''

---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]

**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]

**Additional context**
Add any other context about the problem here.
20 changes: 20 additions & 0 deletions .github/ISSUE_TEMPLATE/feature_request.md
@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
53 changes: 11 additions & 42 deletions README.md
@@ -26,7 +26,7 @@ We developed the RPC framework [KiteX][KiteX] and HTTP
framework [Hertz][Hertz] (to be open sourced) based
on [Netpoll][Netpoll], both with industry-leading performance.

[Examples][Netpoll-Benchmark] show how to build RPC client and server
[Examples][netpoll-benchmark] show how to build RPC client and server
using [Netpoll][Netpoll].

For more information, please refer to [Document](#document).
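To make this concrete, here is a minimal echo-server sketch using Netpoll's public API at this release (CreateListener, NewEventLoop, and an OnRequest handler); the address and the handler body are illustrative, not taken from the benchmark repository.

```go
package main

import (
	"context"

	"github.com/cloudwego/netpoll"
)

func main() {
	listener, err := netpoll.CreateListener("tcp", "127.0.0.1:8080")
	if err != nil {
		panic(err)
	}

	// onRequest is invoked by the poller whenever readable data arrives.
	onRequest := func(ctx context.Context, conn netpoll.Connection) error {
		reader, writer := conn.Reader(), conn.Writer()
		// Echo back whatever is currently buffered on the connection.
		buf, err := reader.Next(reader.Len())
		if err != nil {
			return err
		}
		if _, err := writer.WriteBinary(buf); err != nil {
			return err
		}
		if err := writer.Flush(); err != nil {
			return err
		}
		return reader.Release()
	}

	eventLoop, err := netpoll.NewEventLoop(onRequest)
	if err != nil {
		panic(err)
	}
	// Serve blocks until the event loop is shut down.
	_ = eventLoop.Serve(listener)
}
```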
@@ -56,49 +56,19 @@ For more information, please refer to [Document](#document).

# Performance

Benchmarking is not a numbers game; it should first meet the requirements of industrial use. In the RPC scenario,
concurrent calls and wait timeouts are required capabilities.
Benchmarks should meet the requirements of industrial use.
In the RPC scenario, concurrent calls and timeouts are required capabilities.

Therefore, we require that the benchmark meet the following conditions:
We provide the [netpoll-benchmark][netpoll-benchmark] project to track and compare
the performance of [Netpoll][Netpoll] and other frameworks under different conditions for reference.

1. Support concurrent calls and a timeout (1s)
2. Protocol: a 4-byte header indicates the total length of the payload (as sketched below)
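As a hedged illustration of this wire format (not the benchmark's actual codec), the sketch below frames messages with a 4-byte big-endian header, assumed here to carry the payload length, on top of Netpoll's Reader/Writer interfaces.

```go
package codec

import (
	"encoding/binary"

	"github.com/cloudwego/netpoll"
)

// WriteFrame prefixes payload with a 4-byte big-endian length header and flushes it.
func WriteFrame(w netpoll.Writer, payload []byte) error {
	header, err := w.Malloc(4) // reserve the header in the output buffer
	if err != nil {
		return err
	}
	binary.BigEndian.PutUint32(header, uint32(len(payload)))
	if _, err := w.WriteBinary(payload); err != nil {
		return err
	}
	return w.Flush()
}

// ReadFrame blocks until one complete frame is available and returns a copy of its payload.
func ReadFrame(r netpoll.Reader) ([]byte, error) {
	header, err := r.Next(4)
	if err != nil {
		return nil, err
	}
	size := int(binary.BigEndian.Uint32(header))
	body, err := r.Next(size)
	if err != nil {
		return nil, err
	}
	// Copy before Release, because Release may recycle the underlying buffer.
	payload := make([]byte, size)
	copy(payload, body)
	return payload, r.Release()
}
```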
For more benchmarks, see [kitex-benchmark][kitex-benchmark] and [hertz-benchmark][hertz-benchmark] (open source soon).

We compared [Netpoll][Netpoll] with similar repositories such as [net][net], [evio][evio], and [gnet][gnet]
through [Benchmarks][Benchmarks], as shown below.

For more benchmarks, see [Netpoll-Benchmark][Netpoll-Benchmark], [KiteX-Benchmark][KiteX-Benchmark], and [Hertz-Benchmark][Hertz-Benchmark].

### Environment

* CPU: Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz, 4 cores
* Memory: 8GB
* OS: Debian 5.4.56.bsk.1-amd64 x86_64 GNU/Linux
* Go: 1.15.4

### Concurrent Performance (Echo 1KB)

![image](docs/images/c_tp99.png)
![image](docs/images/c_qps.png)

### Transport Performance (concurrent=100)

![image](docs/images/s_tp99.png)
![image](docs/images/s_qps.png)

### Benchmark Conclusion

Compared with [net][net], [Netpoll][Netpoll]'s latency is about 34% and QPS about 110%
(under continued load, net's latency becomes too high to serve as a reference).

# Document
# Reference

* [Official Website](https://www.cloudwego.io)
* [Getting Started](docs/guide/guide_en.md)
* [Design](docs/reference/design_en.md)
* [Change Log](docs/reference/change_log.md)
* [Why DATA RACE](docs/reference/explain.md)

[Netpoll]: https://github.com/cloudwego/netpoll
@@ -110,10 +80,9 @@ Compared with [net][net]
[KiteX]: https://github.com/cloudwego/kitex
[Hertz]: https://github.com/cloudwego/hertz

[Benchmarks]: https://github.com/cloudwego/netpoll-benchmark
[Netpoll-Benchmark]: https://github.com/cloudwego/netpoll-benchmark
[KiteX-Benchmark]: https://github.com/cloudwego/kitex
[Hertz-Benchmark]: https://github.com/cloudwego/hertz
[netpoll-benchmark]: https://github.com/cloudwego/netpoll-benchmark
[kitex-benchmark]: https://github.com/cloudwego/kitex
[hertz-benchmark]: https://github.com/cloudwego/hertz

[ByteDance]: https://www.bytedance.com
[Redis]: https://redis.io
48 changes: 9 additions & 39 deletions README_CN.md
@@ -21,7 +21,7 @@ goroutines, which greatly increases scheduling overhead. In addition, [net.Conn][net.Conn] does not provide
The RPC framework [KiteX][KiteX] and the HTTP framework [Hertz][Hertz] (to be open sourced),
both developed on top of [Netpoll][Netpoll], deliver industry-leading performance.

[Examples][Netpoll-Benchmark] show how to use [Netpoll][Netpoll]
[Examples][netpoll-benchmark] show how to use [Netpoll][Netpoll]
to build an RPC client and server.

For more information, please refer to the [documentation](#文档).
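As a complement to the server sketch in the English section above, here is a minimal client-side sketch; DialConnection is part of Netpoll's public API, while the address, timeout, and message below are illustrative.

```go
package main

import (
	"fmt"
	"time"

	"github.com/cloudwego/netpoll"
)

func main() {
	// Dial with a 1s connect timeout; address and message are illustrative.
	conn, err := netpoll.DialConnection("tcp", "127.0.0.1:8080", time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	msg := "hello netpoll"
	writer := conn.Writer()
	if _, err := writer.WriteString(msg); err != nil {
		panic(err)
	}
	if err := writer.Flush(); err != nil {
		panic(err)
	}

	// Read the echoed reply (same length as the request in this sketch).
	reader := conn.Reader()
	reply, err := reader.Next(len(msg))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(reply))
	_ = reader.Release()
}
```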
@@ -51,46 +51,17 @@ goroutines, which greatly increases scheduling overhead. In addition, [net.Conn][net.Conn] does not provide

# Performance

Benchmarking is not a numbers game; it must first meet the requirements of industrial use. In the RPC scenario, concurrent requests and wait timeouts are required capabilities.
Benchmarks should meet the requirements of industrial use. In the RPC scenario, concurrent requests and wait timeouts are required capabilities.

Therefore, we require that the benchmark meet the following conditions:
We provide the [netpoll-benchmark][netpoll-benchmark] project to continuously track and compare the performance of [Netpoll][Netpoll] and other frameworks under different conditions, for reference.

1. Support concurrent requests and timeouts (1s), as sketched below
2. Protocol: a 4-byte header indicates the total length
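To illustrate the timeout condition in item 1, the sketch below applies a 1-second read timeout via SetReadTimeout; the helper name and the byte count are illustrative, not part of this commit.

```go
package rpcdemo

import (
	"time"

	"github.com/cloudwego/netpoll"
)

// readWithTimeout applies a 1s read timeout so that a blocking Next returns a
// timeout error instead of waiting forever for the requested bytes.
func readWithTimeout(conn netpoll.Connection, n int) ([]byte, error) {
	if err := conn.SetReadTimeout(time.Second); err != nil {
		return nil, err
	}
	reader := conn.Reader()
	buf, err := reader.Next(n) // fails once the 1s deadline passes
	if err != nil {
		return nil, err
	}
	// Copy out before Release, because Release may recycle the buffer.
	out := make([]byte, n)
	copy(out, buf)
	return out, reader.Release()
}
```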
For more benchmarks, see [kitex-benchmark][kitex-benchmark] and [hertz-benchmark][hertz-benchmark] (open source soon).

The comparison targets are [net][net], [evio][evio], and [gnet][gnet]; we compared their performance via the [benchmark code][Benchmarks].

For more benchmarks, see [Netpoll-Benchmark][Netpoll-Benchmark], [KiteX-Benchmark][KiteX-Benchmark], and [Hertz-Benchmark][Hertz-Benchmark].

### Test Environment

* CPU: Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz, 4 cores
* Memory: 8GB
* OS: Debian 5.4.56.bsk.1-amd64 x86_64 GNU/Linux
* Go: 1.15.4

### Concurrent Performance (echo 1KB)

![image](docs/images/c_tp99.png)
![image](docs/images/c_qps.png)

### Transport Performance (concurrency = 100)

![image](docs/images/s_tp99.png)
![image](docs/images/s_qps.png)

### Benchmark Conclusion

Compared with [net][net], [Netpoll][Netpoll]'s latency is about 34% and QPS about 110% (under further load, net's latency becomes too high to serve as a reference).

# Documentation
# Reference

* [Official Website](https://www.cloudwego.io)
* [User Guide](docs/guide/guide_cn.md)
* [Design](docs/reference/design_cn.md)
* [Change Log](docs/reference/change_log.md)
* [DATA RACE Explanation](docs/reference/explain.md)

[Netpoll]: https://github.com/cloudwego/netpoll
Expand All @@ -102,10 +73,9 @@ goroutine,大幅增加调度开销。此外,[net.Conn][net.Conn] 没有提
[KiteX]: https://github.com/cloudwego/kitex
[Hertz]: https://github.com/cloudwego/hertz

[Benchmarks]: https://github.com/cloudwego/netpoll-benchmark
[Netpoll-Benchmark]: https://github.com/cloudwego/netpoll-benchmark
[KiteX-Benchmark]: https://github.com/cloudwego/kitex
[Hertz-Benchmark]: https://github.com/cloudwego/hertz
[netpoll-benchmark]: https://github.com/cloudwego/netpoll-benchmark
[kitex-benchmark]: https://github.com/cloudwego/kitex
[hertz-benchmark]: https://github.com/cloudwego/hertz

[ByteDance]: https://www.bytedance.com
[Redis]: https://redis.io
66 changes: 44 additions & 22 deletions connection_impl.go
@@ -41,6 +41,8 @@ type connection struct {
inputBarrier *barrier
outputBarrier *barrier
supportZeroCopy bool
maxSize int // The maximum size of data between two Release().
bookSize int // The size of data that can be read at once.
}

var _ Connection = &connection{}
Expand Down Expand Up @@ -106,6 +108,18 @@ func (c *connection) Skip(n int) (err error) {

// Release implements Connection.
func (c *connection) Release() (err error) {
// Check inputBuffer length first to reduce contention in mux situation.
if c.inputBuffer.Len() == 0 && c.lock(reading) {
// Double check length to calculate the maxSize
if c.inputBuffer.Len() == 0 {
maxSize := c.inputBuffer.calcMaxSize()
if maxSize > c.maxSize {
c.maxSize = maxSize
}
c.inputBuffer.resetTail(c.maxSize)
}
c.unlock(reading)
}
return c.inputBuffer.Release()
}
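For context on why Release matters here: it is the call that lets the connection recycle its input buffer between messages. A typical (hedged) consumption pattern on the reading side looks like the sketch below, built on Reader's Peek/Skip/Next/Release; the frame layout and the process helper are illustrative, not code from this commit.

```go
package readdemo

import (
	"encoding/binary"

	"github.com/cloudwego/netpoll"
)

// process stands in for application-level handling of a message body.
func process(body []byte) {}

// handleOne decodes one length-prefixed message: Peek inspects the 4-byte
// header without consuming it, Skip then discards it, Next reads the body,
// and Release lets the connection recycle the input buffer afterwards.
func handleOne(conn netpoll.Connection) error {
	reader := conn.Reader()
	header, err := reader.Peek(4)
	if err != nil {
		return err
	}
	size := int(binary.BigEndian.Uint32(header))
	if err := reader.Skip(4); err != nil {
		return err
	}
	body, err := reader.Next(size)
	if err != nil {
		return err
	}
	process(body) // body must not be retained after Release
	return reader.Release()
}
```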

@@ -165,12 +179,12 @@ func (c *connection) MallocLen() (length int) {
// If empty, it will call syscall.Write to send data directly,
// otherwise the buffer will be sent asynchronously by the epoll trigger.
func (c *connection) Flush() error {
if c.IsActive() && c.lock(outputBuffer) {
c.outputBuffer.Flush()
c.unlock(outputBuffer)
return c.flush()
if !c.lock(flushing) {
return Exception(ErrConnClosed, "when flush")
}
return Exception(ErrConnClosed, "when flush")
defer c.unlock(flushing)
c.outputBuffer.Flush()
return c.flush()
}
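The write path mirrors this: Malloc reserves space in the output buffer, MallocAck trims the reservation to what was actually used, and Flush hands the data over (directly or via the poller, as the comment above describes). A rough sketch, assuming nothing else has been malloc'd since the last Flush and using an illustrative size and message:

```go
package writedemo

import "github.com/cloudwego/netpoll"

// writeGreeting reserves more output space than needed, fills part of it,
// trims the reservation with MallocAck, and then flushes.
func writeGreeting(conn netpoll.Connection) error {
	writer := conn.Writer()
	buf, err := writer.Malloc(64) // reserve 64 bytes in the output buffer
	if err != nil {
		return err
	}
	n := copy(buf, "hello, netpoll")
	// Keep only the n bytes actually written (assumes no earlier un-flushed Malloc).
	if err := writer.MallocAck(n); err != nil {
		return err
	}
	return writer.Flush()
}
```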

// MallocAck implements Connection.
@@ -179,7 +193,7 @@ func (c *connection) MallocAck(n int) (err error) {
}

// Append implements Connection.
func (c *connection) Append(w Writer) (n int, err error) {
func (c *connection) Append(w Writer) (err error) {
return c.outputBuffer.Append(w)
}

@@ -260,6 +274,7 @@ func (c *connection) init(conn Conn, prepare OnPrepare) (err error) {
// init buffer, barrier, finalizer
c.readTrigger = make(chan struct{}, 1)
c.writeTrigger = make(chan error, 1)
c.bookSize, c.maxSize = block1k/2, pagesize
c.inputBuffer, c.outputBuffer = NewLinkBuffer(pagesize), NewLinkBuffer()
c.inputBarrier, c.outputBarrier = barrierPool.Get().(*barrier), barrierPool.Get().(*barrier)
c.setFinalizer()
@@ -304,6 +319,7 @@ func (c *connection) initFDOperator() {

func (c *connection) setFinalizer() {
c.AddCloseCallback(func(connection Connection) error {
c.stop(flushing)
c.netFD.Close()
c.closeBuffer()
freeop(c.operator)
@@ -327,11 +343,10 @@ func (c *connection) triggerWrite(err error) {

// waitRead will wait full n bytes.
func (c *connection) waitRead(n int) (err error) {
leftover := n - c.inputBuffer.Len()
if leftover <= 0 {
if n <= c.inputBuffer.Len() {
return nil
}
atomic.StoreInt32(&c.waitReadSize, int32(leftover))
atomic.StoreInt32(&c.waitReadSize, int32(n))
defer atomic.StoreInt32(&c.waitReadSize, 0)
if c.readTimeout > 0 {
return c.waitReadWithTimeout(n)
@@ -359,24 +374,31 @@ func (c *connection) waitReadWithTimeout(n int) (err error) {
} else {
c.readTimer.Reset(c.readTimeout)
}

for c.inputBuffer.Len() < n {
if c.IsActive() {
select {
case <-c.readTimer.C:
return Exception(ErrReadTimeout, c.readTimeout.String())
case <-c.readTrigger:
continue
if !c.IsActive() {
// cannot return directly, stop timer before !
// confirm that fd is still valid.
if atomic.LoadUint32(&c.netFD.closed) == 0 {
err = c.fill(n)
} else {
err = Exception(ErrConnClosed, "wait read")
}
break
}
// cannot return directly, stop timer before !
// confirm that fd is still valid.
if atomic.LoadUint32(&c.netFD.closed) == 0 {
err = c.fill(n)
} else {
err = Exception(ErrConnClosed, "wait read")

select {
case <-c.readTimer.C:
// double check if there is enough data to be read
if c.inputBuffer.Len() >= n {
return nil
}
return Exception(ErrReadTimeout, c.remoteAddr.String())
case <-c.readTrigger:
continue
}
break
}

// clean timer.C
if !c.readTimer.Stop() {
<-c.readTimer.C
19 changes: 16 additions & 3 deletions connection_lock.go
@@ -29,12 +29,25 @@ const (

type key int32

/* State Diagram
+--------------+         +--------------+
|  processing  |-------->|   flushing   |
+-------+------+         +-------+------+
        |
        |                +--------------+
        +--------------->|   closing    |
                         +--------------+

- "processing" locks onRequest handler, and doesn't exist in dialer.
- "flushing" locks outputBuffer
- "closing" should wait for flushing finished and call the closeCallback after that.
*/

const (
closing key = iota
processing
writing
inputBuffer
outputBuffer
flushing
reading
// total must be at the bottom.
total
)
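The keys above feed a per-connection try-lock. As a rough illustration only (not the implementation in this file), such a lock can be built from one atomic slot per key:

```go
package lockdemo

import "sync/atomic"

// key mirrors the lock keys declared above; total sizes the keychain.
type key int32

const total = 8 // illustrative; the real value comes from the const block above

// locker is an illustrative sketch: one int32 slot per key, where 0 means
// unlocked and 1 means locked, so lock is a non-blocking try-lock.
type locker struct {
	keychain [total]int32
}

// lock tries to acquire the given key and reports whether it succeeded.
func (l *locker) lock(k key) bool {
	return atomic.CompareAndSwapInt32(&l.keychain[k], 0, 1)
}

// unlock releases the key unconditionally.
func (l *locker) unlock(k key) {
	atomic.StoreInt32(&l.keychain[k], 0)
}
```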
