Virtual cluster wide topic id cache #11
Changes from all commits: 7363428, 0b54d75, a7ad3fc, 224f288, e130c3a
ContextCacheLoader.java (new file)
@@ -0,0 +1,16 @@

package io.strimzi.kafka.topicenc.kroxylicious;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

import org.apache.kafka.common.Uuid;

import com.github.benmanes.caffeine.cache.AsyncCacheLoader;

public class ContextCacheLoader implements AsyncCacheLoader<Uuid, String> {

    @Override
    public CompletableFuture<? extends String> asyncLoad(Uuid key, Executor executor) throws Exception {
        return null;
    }
}
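ContextCacheLoader is still a stub in this draft (asyncLoad returns null). For reference, a minimal sketch, under the assumption that the loader eventually performs a real lookup, of how an AsyncCacheLoader is typically wired into Caffeine via buildAsync(loader); fetchTopicNameFromCluster is a hypothetical placeholder and is not part of this PR:

import java.time.Duration;
import java.util.concurrent.CompletableFuture;

import org.apache.kafka.common.Uuid;

import com.github.benmanes.caffeine.cache.AsyncCacheLoader;
import com.github.benmanes.caffeine.cache.AsyncLoadingCache;
import com.github.benmanes.caffeine.cache.Caffeine;

class CacheLoaderWiringSketch {

    // Sketch only: with buildAsync(loader), a cache miss on get(topicId) invokes the loader asynchronously.
    static AsyncLoadingCache<Uuid, String> buildCache() {
        AsyncCacheLoader<Uuid, String> loader = (topicId, executor) ->
                CompletableFuture.supplyAsync(() -> fetchTopicNameFromCluster(topicId), executor);
        return Caffeine.newBuilder()
                .expireAfterAccess(Duration.ofMinutes(10))
                .buildAsync(loader);
    }

    // Hypothetical placeholder; a real implementation would resolve the name from cluster metadata.
    private static String fetchTopicNameFromCluster(Uuid topicId) {
        throw new UnsupportedOperationException("placeholder");
    }
}

With that wiring, cache.get(topicId) returns a CompletableFuture<String> and de-duplicates concurrent loads for the same key; the PR as written instead populates the cache externally from TopicIdCache.resolveTopicNames below.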
TopicIdCache.java (new file)
@@ -0,0 +1,62 @@
package io.strimzi.kafka.topicenc.kroxylicious;

import java.time.Duration;
import java.util.List;
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

import org.apache.kafka.common.Uuid;
import org.apache.kafka.common.message.MetadataResponseData;
import org.apache.kafka.common.requests.MetadataRequest;

import com.github.benmanes.caffeine.cache.AsyncCache;
import com.github.benmanes.caffeine.cache.Caffeine;

import io.kroxylicious.proxy.filter.KrpcFilterContext;

public class TopicIdCache {
    private final AsyncCache<Uuid, String> topicNamesById;

    public TopicIdCache() {
        this(Caffeine.newBuilder().expireAfterAccess(Duration.ofMinutes(10)).buildAsync());
Review thread on the line above:
- Any reason for expiring? Is this to keep only relevant/used mappings cached?
- Yeah. Can't say I gave it too much thought. I wanted to avoid the data going stale/leaking in case the proxy missed a metadata update which deleted a topic.
- I suppose if a Kafka use case used short-lived topics, then this would be a concern.
- I think it should probably be a bounded cache as well, given we are going to have one per VirtualCluster.

(A bounded-cache sketch follows the file listing below.)
    }

    TopicIdCache(AsyncCache<Uuid, String> topicNamesById) {
        this.topicNamesById = topicNamesById;
    }

    /**
     * Exposes a future to avoid multiple clients triggering metadata requests for the same topicId.
     * @param topicId to convert to a name
     * @return the Future which will be completed when the topic name is resolved or <code>null</code> if the topic is not known (and is not currently being resolved)
     */
    public CompletableFuture<String> getTopicName(Uuid topicId) {
        return topicNamesById.getIfPresent(topicId);
    }

    public boolean hasResolvedTopic(Uuid topicId) {
        final CompletableFuture<String> topicNameFuture = topicNamesById.getIfPresent(topicId);
        // Caffeine converts failed or cancelled futures to null internally, so we don't have to handle them explicitly
        return topicNameFuture != null && topicNameFuture.isDone();
    }

    public void resolveTopicNames(KrpcFilterContext context, Set<Uuid> topicIdsToResolve) {
        final MetadataRequest.Builder builder = new MetadataRequest.Builder(List.copyOf(topicIdsToResolve));
        final MetadataRequest metadataRequest = builder.build(builder.latestAllowedVersion());
        topicIdsToResolve.forEach(uuid -> topicNamesById.put(uuid, new CompletableFuture<>()));
        context.<MetadataResponseData> sendRequest(metadataRequest.version(), metadataRequest.data())
                .whenComplete((metadataResponseData, throwable) -> {
Review thread on the handler below:
- What path is followed when topicId is not found?
- None yet 😁, as I haven't spun up a real cluster to work out what that would look like (this is the sort of reason it's still a draft PR). I suspect it will need to fail the future or even just complete it with null and let it get re-queried.

(One possible approach is sketched after the file listing below.)
                    if (throwable != null) {
                        // TODO something sensible
                    }
                    else {
                        metadataResponseData.topics()
                                .forEach(metadataResponseTopic -> Objects.requireNonNull(topicNamesById.getIfPresent(metadataResponseTopic.topicId()))
                                        .complete(metadataResponseTopic.name()));
                        // If we were to get null from getIfPresent it would imply we got a result for a topic we didn't expect
                    }
                });
    }

}
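Two open points from the review threads above are the suggestion to bound the cache and the question of what happens when a topic id is not resolved by the metadata response. A hedged sketch of how both might look; the maximumSize value and the choice of UnknownTopicIdException are assumptions, not decisions made in this PR:

import java.time.Duration;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

import org.apache.kafka.common.Uuid;
import org.apache.kafka.common.errors.UnknownTopicIdException;
import org.apache.kafka.common.message.MetadataResponseData;

import com.github.benmanes.caffeine.cache.AsyncCache;
import com.github.benmanes.caffeine.cache.Caffeine;

class BoundedTopicIdCacheSketch {

    // Bounded as well as expiring, since there would be one cache per virtual cluster.
    // The maximumSize value here is purely illustrative.
    private final AsyncCache<Uuid, String> topicNamesById = Caffeine.newBuilder()
            .maximumSize(10_000)
            .expireAfterAccess(Duration.ofMinutes(10))
            .buildAsync();

    // Called with the response to the metadata request sent for topicIdsToResolve.
    void onMetadataResponse(Set<Uuid> topicIdsToResolve, MetadataResponseData response) {
        // Complete the futures for every topic the cluster did resolve.
        response.topics().forEach(topic -> {
            CompletableFuture<String> pending = topicNamesById.getIfPresent(topic.topicId());
            if (pending != null) {
                pending.complete(topic.name());
            }
        });
        // Any future still incomplete corresponds to a topic id the cluster did not resolve;
        // fail it so callers are not left waiting forever.
        topicIdsToResolve.forEach(uuid -> {
            CompletableFuture<String> pending = topicNamesById.getIfPresent(uuid);
            if (pending != null && !pending.isDone()) {
                pending.completeExceptionally(
                        new UnknownTopicIdException("Unresolved topic id: " + uuid));
            }
        });
    }
}

Because Caffeine treats failed futures as absent (as the comment in hasResolvedTopic notes), completing exceptionally effectively evicts the entry, so a later request for the same topic id triggers a fresh metadata lookup, which lines up with the "let it get re-queried" option mentioned in the thread.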
Review thread:
- Is the intent here that the config will be given access to the VirtualCluster name, or its UID?
- Haha, you spotted my deliberate fudge. I'm currently working on https://github.com/sambarker/kroxylicious/tree/name_that_cluster. My current suspicion is it will need to be name based, as we are leaning towards relaxed restrictions on clusterIDs.
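On that point, a minimal sketch, assuming name-based lookup wins out, of keeping one TopicIdCache per virtual cluster; the VirtualClusterTopicIdCaches class and its method names are hypothetical, not part of this PR:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical registry: one TopicIdCache per virtual cluster, keyed by the cluster name
// (the discussion above leans towards names rather than clusterIDs).
class VirtualClusterTopicIdCaches {

    private final Map<String, TopicIdCache> cachesByClusterName = new ConcurrentHashMap<>();

    TopicIdCache forCluster(String virtualClusterName) {
        return cachesByClusterName.computeIfAbsent(virtualClusterName, name -> new TopicIdCache());
    }
}

A filter handling requests for a given virtual cluster would then call forCluster(name) once and use hasResolvedTopic, resolveTopicNames, and getTopicName on the cache it gets back.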