Streaming tests added #139
@@ -7,7 +7,7 @@ const alg = {
  },
  "name": "statefull-time-statistics-tst",
  "entryPoint": "statefullGetSendTime.py",
- "cpu": 0.03,
+ "cpu": 0.1,
Such a low CPU allocation might cause problems producing messages.
That's true, but only when we expect high message rates.
In the tests where we expect this, I've overridden the stateful algorithm's CPU via the createAlg method:
const createAlg = async (alg, cpu) => {
    await deleteAlgorithm(alg.name, true, true);
    if (cpu) {
        alg.cpu = cpu;
    }
    await storeAlgorithms(alg);
}
Would you prefer we increase the CPU allocation for every test's stateful configuration? As it stands, it is only modified when needed, with 0.1 being the default value.
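A self-contained sketch (Node.js) of that pattern, with deleteAlgorithm and storeAlgorithms stubbed out here since the real helpers live in the test suite's API utilities, so names and signatures are assumptions:

```javascript
// Stand-in stubs for the suite's real API helpers (assumption:
// actual signatures may differ).
const stored = [];
const deleteAlgorithm = async (name, noWait, force) => { /* stub */ };
const storeAlgorithms = async (alg) => { stored.push({ ...alg }); };

// The helper from the comment above: optionally overrides CPU
// before (re)storing the algorithm.
const createAlg = async (alg, cpu) => {
    await deleteAlgorithm(alg.name, true, true);
    if (cpu) {
        alg.cpu = cpu; // override only when a test needs more CPU
    }
    await storeAlgorithms(alg);
};

// A high-rate test bumps the default 0.1 CPU; low-rate tests omit
// the second argument and keep the template's default.
const statefull = { name: "statefull-time-statistics-tst", cpu: 0.1 };
const done = createAlg(statefull, 0.3);
```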
"kind": "stream",
"flowInput": {
  "process_time": 0.02,
  "flows": [
Both flows have the same rate. The idea was to test different rates from each node.
It's a template; in the test itself the programs are initialised:
it("should satisfy the request rate of 2 statefuls, each with different rate", async () => {
    await createAlg(statefull);
    algList.push(statefull.name);
    await createAlg(stateless);
    algList.push(stateless.name);
    const flow1Config = {
        programs: [
            { rate: 120, time: 50 }
        ]
    };
    const flow2Config = {
        flowName: "hkube_desc2",
        programs: [
            { rate: 60, time: 50 }
        ]
    };
    streamDifferentFlows.flowInput = combineFlows([flow1Config, flow2Config]);
    ...
Updated the template itself for clarity.
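For reference, a hypothetical sketch of what a combineFlows helper could do; the real implementation is in the test suite, and the default flow-name scheme here is an assumption:

```javascript
// Hypothetical sketch of combineFlows (assumption: the real helper
// may differ). It merges per-flow program configs into the flows
// array the streaming template's flowInput expects, assigning a
// default flow name when none is given.
const combineFlows = (flowConfigs) =>
    flowConfigs.map((cfg, i) => ({
        // Assumed default naming, mirroring "hkube_desc2" in the test.
        name: cfg.flowName || `hkube_desc${i + 1}`,
        programs: cfg.programs,
    }));

const flows = combineFlows([
    { programs: [{ rate: 120, time: 50 }] },
    { flowName: "hkube_desc2", programs: [{ rate: 60, time: 50 }] },
]);
```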
Related to this PR: kube-HPC/hkube#2009
which is related to this issue: kube-HPC/hkube#2002
Added end-to-end tests for the new streaming scaling logic.
Warning: do not merge until the related PR is updated.