[ Get-Started | Features | Use Cases | Motivation | Demos | Performance | Compare | About ]
[ API Overview | Generate & Submit Task | Track Status & Examine Responses | Configurations ]
Parallec is a fast parallel async HTTP/SSH/TCP/Ping client Java library. Scalably aggregate and handle API responses any way you like and send them anywhere by writing 20 lines of code. A super convenient response context lets you pass any object in and out when handling the responses. Now you can conduct scalable API calls, then effortlessly pass the aggregated data anywhere: Elasticsearch, Kafka, MongoDB, Graphite, Memcached, etc. Parallec means Parallel Client, and is pronounced as "Para-like".
Download the latest JAR or grab from Maven:
<dependency>
    <groupId>io.parallec</groupId>
    <artifactId>parallec-core</artifactId>
    <version>0.9.0</version>
</dependency>
or Gradle:
compile 'io.parallec:parallec-core:0.9.0'
### 6 Line Example
In the example below, simply changing prepareHttpGet() to prepareSsh(), prepareTcp(), or preparePing() enables you to conduct parallel SSH/TCP/Ping (a ping variant is sketched after this example). For details, please refer to the Javadoc and example code.
import io.parallec.core.*;
import java.util.Map;
ParallelClient pc = new ParallelClient();
pc.prepareHttpGet("").setTargetHostsFromString("www.google.com www.ebay.com www.yahoo.com")
.execute(new ParallecResponseHandler() {
public void onCompleted(ResponseOnSingleTask res,
Map<String, Object> responseContext) {
System.out.println( res.toString() ); }
});
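For instance, a parallel ping over the same targets needs only the prepare call swapped. Below is a minimal sketch following the same handler pattern; the target host names are placeholders.

```java
import io.parallec.core.*;
import java.util.Map;

ParallelClient pc = new ParallelClient();
// Same builder pattern as above: only the prepare call changes for ping.
pc.preparePing().setTargetHostsFromString("www.google.com www.ebay.com www.yahoo.com")
    .execute(new ParallecResponseHandler() {
        public void onCompleted(ResponseOnSingleTask res,
                Map<String, Object> responseContext) {
            // res carries the per-host result (host, success/failure, error details)
            System.out.println(res.toString());
        }
    });
pc.releaseExternalResources();
```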
### 20 Line Example
Now that you have learned the basics, check out how easy it is to pass an Elasticsearch client through the convenient response context to aggregate data anywhere you like.
ParallelClient pc = new ParallelClient();
org.elasticsearch.node.Node node = nodeBuilder().node(); //elastic client initialize
HashMap<String, Object> responseContext = new HashMap<String, Object>();
responseContext.put("Client", node.client());
pc.prepareHttpGet("")
.setConcurrency(1000).setResponseContext(responseContext)
.setTargetHostsFromLineByLineText("http://www.parallec.io/userdata/sample_target_hosts_top100_old.txt", HostsSourceType.URL)
.execute( new ParallecResponseHandler() {
public void onCompleted(ResponseOnSingleTask res,
Map<String, Object> responseContext) {
Map<String, Object> metricMap = new HashMap<String, Object>();
metricMap.put("StatusCode", res.getStatusCode().replaceAll(" ", "_"));
metricMap.put("LastUpdated",PcDateUtils.getNowDateTimeStrStandard());
metricMap.put("NodeGroupType", "Web100");
Client client = (Client) responseContext.get("Client");
client.prepareIndex("local", "parallec", res.getHost()).setSource(metricMap).execute();
}
});
node.close(); pc.releaseExternalResources();
### Different Requests to the Same Target
Now see how easy it is to use the request template to send multiple different requests to the same target. Read more..
pc.prepareHttpGet("/userdata/sample_weather_$ZIP.txt")
.setReplaceVarMapToSingleTargetSingleVar("ZIP",
Arrays.asList("95037","48824"), "www.parallec.io")
.execute(new ParallecResponseHandler() {...}...
- http://www.parallec.io/userdata/sample_weather_48824.txt
- http://www.parallec.io/userdata/sample_weather_95037.txt
### More Readings
- More examples on setting the context, sending to Elasticsearch, async running, auto progress polling, tracking progress, and TCP/SSH/Ping.
- Set target hosts from a list, string, line-by-line text, or JSON Path, from local files or remote URLs.
- Full Documentation
- Javadoc
- Ping demo: ping 8,000 servers within 11.1 seconds; performance test vs. FPing.
### Use Cases
- Scalable web server monitoring, management, configuration push, and ping check.
- Asset / server status discovery and remote task execution, agentless (parallel SSH) or agent-based (parallel HTTP/TCP).
- Scalable API aggregation and processing, with flexible destinations: your favorite message queue, storage, or alert engine.
- Orchestration and workflows on multiple web servers.
- Parallel different requests with controlled concurrency to a single server.
### Features
Parallec is built on Akka actors and Async HTTP Client / Netty / JSch. The library focuses on HTTP while also enabling scalable communication over SSH/Ping/TCP.
90%+ test coverage assures you can always find an example of each feature.
- Exceedingly intuitive interface with builder pattern similar to that in Async HTTP Client, but handles concurrency behind the scenes.
- Generic response handler with context. The special response context enables total freedom and convenience in processing each response your way. Process and aggregate data anywhere: Kafka, Redis, Elasticsearch, MongoDB, CMS, etc.
- Flexible on when to invoke the handler: before aggregation (in the worker thread) or after aggregation (in the master/manager thread).
- Flexible input of target hosts: from a list, string, or JSON Path, from local files or a remote URL.
- Scalable and fast, with built-in concurrency control.
- Auto-progress polling to enable task level concurrency with Async API for long jobs and orchestrations.
- Request template to handle non-uniform requests.
- Convenient single-place handling of success and failure cases: a single function receives the actual response on success, or the stack trace and error details on failure.
- Capacity-aware task scheduler helps you auto-queue and fire tasks when capacity is insufficient (e.g., submitting 5 consecutive tasks, each hitting 100K websites with the default concurrency, will result in queuing).
- Fine-grained task progress tracking helps you track the progress and status of each individual task. For a parallel task on 1,000 target hosts, you may check the status of any single host's request and the percentage completed.
- Fine-grained task cancellation at the whole-task or individual-request level. For a parallel task on 1,000 target hosts, you may cancel a subset of target hosts or cancel the whole parallel task at any time.
- Status-code-aggregation is provided out of the box.
- Parallel Ping supports both InetAddress.isReachable()-based ICMP ping (requires root) and process-based ping, with retries. Performance testing shows it is **2x the speed of** best-effort tuned FPing when pinging 1,500 targets (2.2 vs. 4.5 seconds).
- Parallel SSH supports both key- and password-based login and task cancellation (a password-based sketch follows this list).
- Parallel TCP supports idle timeout based channel closes.
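As a rough illustration of the password-based SSH login mentioned above, the sketch below reuses the same builder and handler pattern. The setSshCommandLine / setSshUserName / setSshPassword method names, host names, and credentials are assumptions for illustration; check the Javadoc and example code for the exact API.

```java
import io.parallec.core.*;
import java.util.Map;

ParallelClient pc = new ParallelClient();
// Hypothetical parallel SSH sketch: run one command on two hosts with password login.
// The setSsh* method names and credentials below are illustrative assumptions.
pc.prepareSsh().setTargetHostsFromString("host1.example.com host2.example.com")
    .setSshCommandLine("df -h")
    .setSshUserName("deploy")
    .setSshPassword("secret")
    .execute(new ParallecResponseHandler() {
        public void onCompleted(ResponseOnSingleTask res,
                Map<String, Object> responseContext) {
            // The response content holds the command output, or error details on failure.
            System.out.println(res.toString());
        }
    });
pc.releaseExternalResources();
```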
## Motivation
- Flexible response handling and immediate processing embedded in other applications.
- Handle async APIs with auto progress polling for task level concurrency control.
- Support of other protocols, and more..
With the feedback, lessons, and improvements from the past year of internal usage and the open source release of REST Commander, we have now made the core of REST Commander an easy-to-use standalone library. We added 15+ new features and rewrote 70%+ of the code, with 90%+ test coverage for confident usage and contribution. This time we also structured it better so that most internal development can be made directly here.
### Demos
[Watch Demo](https://www.youtube.com/watch?v=QcavegPMDms "Parallec demo - Click to Watch!"): Parallec aggregates the status of 100 websites to Elasticsearch and visualizes it, with 20 lines of code.
[Watch Ping Demo](https://www.youtube.com/watch?v=9m1TFuO1Mys "Parallec Ping vs FPing demo - Click to Watch!"): Parallec is 2x the speed of best-effort tuned FPing with the same accurate results, and pings 8,000 servers within 11.1 seconds; for details check here.
### Performance
Note that speed varies based on network speed, API response time, the slowest servers, and concurrency settings.
##### HTTP
We exercised a remote task execution API on 3,000 servers, with responses aggregated to Elasticsearch and visualized, within 15 seconds, by writing 25 lines of code.
With another, faster API, we called 8,000 servers in the same datacenter and aggregated the responses in memory within 12 seconds.
##### Ping
Parallec: 2.2 seconds vs. FPing: 4.5 seconds on 1,500 servers. Parallec is 2x the speed of FPing (after best-effort tuning: -i 1 -r 0, v3.12) when pinging 1,500 servers while getting the same ping results. Parallec pings 8,000 servers within 11.1 seconds with ease.
As usual, don't rely on these numbers and perform your own benchmarks.
### Compare
- Compared with Java thread-pool-based solutions, Parallec gives you worry-free concurrency control without constraints on thread pool size.
- Compared with single-threaded Node.js solutions, Parallec enables parallel, computation-intensive response handling on multiple cores.
- Python has similar issues with its global interpreter lock; using multiple CPUs requires costly multi-processing. Such solutions suit I/O-only workloads rather than CPU-intensive response processing.
In Parallec, you may handle responses either in the Worker (before aggregation: in parallel) or in the Manager (after aggregation: single thread). Read more..
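A minimal sketch of choosing the handler location, assuming builder methods named handleInWorker() and handleInManager() toggle the two modes (the method names are assumed from the feature description; verify against the Javadoc):

```java
import io.parallec.core.*;
import java.util.Map;

ParallelClient pc = new ParallelClient();
// Assumed switch: handleInWorker() runs the handler in parallel worker threads;
// handleInManager() runs it after aggregation in a single manager thread.
pc.prepareHttpGet("")
    .setTargetHostsFromString("www.google.com www.ebay.com www.yahoo.com")
    .handleInWorker()
    .execute(new ParallecResponseHandler() {
        public void onCompleted(ResponseOnSingleTask res,
                Map<String, Object> responseContext) {
            // CPU-intensive per-response work can run here in parallel when handled in the worker.
            System.out.println(res.getStatusCode() + " " + res.getHost());
        }
    });
pc.releaseExternalResources();
```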
Features | Parallec | REST Commander | Thread Pools + Async Client |
---|---|---|---|
Embedded library with intuitive builder pattern interface | Yes | No | No |
Ready to use application with GUI wizard based request submission and response aggregation | No | Yes | No |
Simple concurrency control not limited by thread size | Yes | Yes | No |
Immediate response handling without waiting for all responses to return | Yes | No | Yes |
Capacity aware task scheduler and global capacity control | Yes | No | No |
Total freedom of response processing and API aggregation: Pluggable and generic response handler and response context | Yes | No | No* |
1 line plugin to enable SSL Client auth | Yes | No | No |
90% Test Coverage | Yes | No | No |
Load target hosts from CMS query, JSON Path, text, list, string from URL/local | Yes | No | No |
Task level concurrency and orchestration for Async APIs: auto polling task progress | Yes | No | No |
Task level configuration on timeout and replacing Async HTTP Client | Yes | No | No |
Async and sync task control with progress polling and cancellation | Yes | No | No |
Scalable Parallel SSH with password and key based login | Yes | No | No |
Proven scalability and speed on 100,000+ target hosts in Production environment | Yes | Yes | No |
Generic request template with variable replacement for sending different requests to same/different target hosts | Yes | Yes | No |
Scalable Ping with Retries | Yes | No | No |
Scalable TCP with idle timeout | Yes | No | No |
Flexible handler location at either worker (in parallel) or manager thread | Yes | No | No |
Out-of-the-box two-level response aggregation on status code | Yes | Yes | No |
Configurable response log trimming on intervals | Yes | No | No |
Cancel task on a list of target hosts | Yes | No | No |
### About
Parallec is served to you by Yuanteng (Jeff) Pei and Teng Song, Cloud Infrastructure & Platform Services (CIPS) at eBay Inc. Your name here: contribute and be part of it.
- We thank our manager Andy Santosa, project manager Marco Rotelli, Cloud Infrastructure & Platform Services (CIPS), and legal for their great support of this project and the open source effort.
- The auto-progress polling is inspired by lightflow.
- We thank OpenPojo and its author Osman Shoukry for his help in making OpenPojo more accessible for us to use in our project.
- We thank AsyncHttpClient and Stephane Landelle for guidance.
## Contributions
Any helpful feedback is more than welcome, including feature requests, bug reports, pull requests, and constructive feedback.
Code licensed under Apache License v2.0
© 2015 eBay Software Foundation