Transition Helm release and chart hosting ownership to CNCF #114
/cc |
Is there a way to transfer buckets from one account to another? I think that would be a perfectly simple solution. We basically just have to keep these running until Nov. 2020, as those repos are already deprecated, and Nov 2020 is our sunset date. |
To add some context...
These are all just details to provide context. @bgrant0607 is there any way to move a Google Cloud bucket from one project to another? This would speed up the process for making any changes. |
@mattfarina I think that moving the bucket is not possible; at least, the Google Cloud documentation tells us the following:
The data itself can be easily transferred - https://console.cloud.google.com/transfer/cloud; but not sure if the links will be preserved in this case. |
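As a rough sketch of what a bucket-to-bucket copy looks like from the CLI (the destination bucket name here is hypothetical, and as noted above, the existing public URLs would not be preserved):

```shell
# Mirror all objects from the old bucket into a new one.
# -m parallelizes the transfer; rsync -r recurses into "directories".
gsutil -m rsync -r gs://kubernetes-charts gs://example-new-charts-bucket
```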
@technosophos @mattfarina @idvoretskyi What I'm looking into is transferring the whole kubernetes-helm and kubernetes-charts GCP projects to a new GCP org: https://cloud.google.com/resource-manager/docs/project-migration AIUI, that would preserve the project names and bucket names. Currently they are in the Google GCP org. We would need to create a CNCF-owned GCP org and billing account. |
Also, usage of the charts bucket in particular continues to grow exponentially. Have charts been moved to a new location? |
@bgrant0607 This is due, I believe, to the overall growth in Helm use which has skyrocketed. I was even surprised when comparing numbers from when Helm joined the CNCF as a sister project to Kubernetes up to now (when I submitted the graduation proposal). The Charts repo is being deprecated and we are asking maintainers of charts to host their own. We have instructions and tools to help them. This also means Helm gets out of their way on process to make changes. On Nov 13th we are turning off the stable and incubator repos. Prior to that we are removing them from the Helm Hub. The exponential growth isn't just on hosting but on maintaining the repository. This is the reason for the change in strategy. |
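The self-hosting workflow that chart maintainers are being pointed to can be sketched with standard Helm commands. A minimal, hedged example — the chart name and GCS bucket below are made up for illustration:

```shell
# Package a chart and (re)generate the repository index.
helm package mychart/
helm repo index . --url https://example-charts.storage.googleapis.com

# Upload the package and index so clients can `helm repo add` the URL.
gsutil cp mychart-0.1.0.tgz index.yaml gs://example-charts
```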
@bgrant0607 The first step is for Helm to get a funded GCP account to transfer to. I've started to kick off the process to work that out. |
Thanks. Putting these into their own account/org also should make it easier for the Helm project to manage security settings, who has access, and so on. |
@bgrant0607 this problem looks like it's going away in November. We are happy to move to a community-managed account. Can Google fund this until then? That would basically be a no-op come November, when the Helm community says they are shutting down the charts repo/hub |
@caniszczyk When this was discussed in April 2019, the estimate I was given was that the turndown would happen by the end of 2019. Given the current rate of growth of usage of the charts bucket, I am concerned about whether a hard cutoff in November 2020 will prove viable. |
@bgrant0607 can you further explain what you mean by...
I'm not quite sure what you mean. Just want to make sure we're on the same page. |
@mattfarina Let's say usage of the charts buckets grows another 60-70% between now and November. Would you still be comfortable just deleting it? |
@bgrant0607 Yes. Wholeheartedly, yes. The reason for that is simple: it is a maintenance nightmare, most of all for the maintainers. The reason we started the Helm Hub, CI tooling for those who host themselves, new ways to host charts (OCI registries, for example), and a push away from the charts repository we host is because it's a nightmare for maintainers. We have had maintainer burnout cycles. We tried other things that lessened the burden, but they did not remove it. And the burden grows. For the sake of the maintainers we are moving to a model where chart maintainers can focus on tooling and practices while people self-host. Helm v3 even provides a means to search the Helm Hub instead of the charts repo. Popularity is fantastic, but the cost is maintainer burnout. So, yes... I'm happy to give that up for the new distributed model. |
+1 on everything @mattfarina said. We have also been broadcasting this information to the broader helm community in dev calls, on slack, in mailing lists, documentation, github issues, blog posts... We've thrown this message at nearly every social medium. We've been informing the community that the stable repository will be shut down in November for quite some time now, so this shouldn't be catching anyone by surprise. |
@bacongobbler @mattfarina The deprecation timeline says "At 1 year, support for this project will formally end, and this repo will be marked obsolete". It should also say that gs://kubernetes-charts and gs://kubernetes-charts-incubator will be deleted, so these chart repositories will no longer be usable. What will happen to the github repo? Will it be archived? |
Good question. As per helm's governance on maintainer structure, it is my understanding that the question of archiving the repo falls on the @helm/charts-maintainers to make that call. The only time org maintainers step in is when the project has no maintainers, or if there's a conflict between multiple maintainer groups. Is there a particular argument for or against archiving the repository here, @bgrant0607? |
@bacongobbler I have no preference for what happens to the git repo. I was just asking. |
I think that can be archived in a read-only option, so if someone still needs any chart from that repo they will be able to get it. |
yes, it should be archived in GitHub with read-only option. |
@bgrant0607 That's a good clarification that we should make on the timeline. I've said that in the past, but I know that some have not realized it. Thanks for pointing it out. @cpanato I would suggest we add details to the readme and archive the repo. This will keep it accessible but clearly no longer in use. @rimusz thanks for pointing out ChartCenter on this. I was not aware this would happen. That's good to know. Do you have a full cache of all versions? |
@mattfarina yes, we cached whatever was available in the |
FYI, I just noticed that the Kubernetes service catalog repo is hosted in the same Google Cloud account. I filed an issue with the project notifying them and providing some paths to handle this. |
Is it expected that updates to kubernetes-charts and kubernetes-charts-incubator are still occurring (e.g., today)? |
It looks like the last updates to helm releases (gs://kubernetes-helm) and to gs://artifacts.kubernetes-helm.appspot.com were last month. |
@bgrant0607 |
@scottrigby Given that this location will abruptly vanish in a month, does it make sense to continue updating items in the bucket? It doesn't seem to encourage maintainers or consumers to move. |
It looks like there are still docs that need to be updated. For example: Or, if you need the empty fantastic-charts bucket as an example, we can delete it in the current project and you can re-create it in another. |
The usage of the charts storage buckets hasn't decreased at all. It's higher than when the deprecation was announced last October. There have been about 3.5 accesses per second on average over the last 12 hours. |
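For scale, the cited average works out to roughly:

```python
# Back-of-the-envelope scale of the cited traffic:
# ~3.5 requests/s averaged over the last 12 hours.
rate_per_sec = 3.5
window_hours = 12

requests_in_window = rate_per_sec * 3600 * window_hours
requests_per_day = rate_per_sec * 86400

print(f"{requests_in_window:,.0f} requests over {window_hours} h")  # 151,200
print(f"{requests_per_day:,.0f} requests/day")                      # 302,400
```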
That is pretty concerning. Started a thread with the rest of the TOC to discuss options: https://lists.cncf.io/g/cncf-toc/message/5407 |
It would be nice to know some details -- any details, really -- about this traffic. None of us has access to traffic data. For example, are we looking at CI/CD systems as the number one cause of traffic? Do we know what files are being accessed most frequently? Are the same clients frequently polling, or do we see a pretty much uniform distribution of client IPs accessing the data? We don't have any actionable data that would, for example, allow us to optimize some particular piece of code, package, or resource. |
@bgrant0607 I know there are parallel convos happening on this now, but want to quickly reply to your questions: Upload usage:
Yes, we initially anticipated a decrease in stable and incubator chart changes (as we've announced this widely for over a year now, in addition to a prominent deprecation timeline at the top of the charts README). But instead we've seen an increase as we count down toward relocating the charts source code (development, and releasing new versions) to new, distributed chart repos. I don't think we should change the timeline we've promised users, though. The issue tracking the relocation of chart source, development, and the release of new versions is here: https://github.com/helm/charts/issues/21103

Download usage: I've summarized options discussed so far for relocating chart package history here: helm/charts#23850 This is really what end users care about. And the downloads account for the vast majority of storage costs, by far. |
Unfortunately, we just found out the GCP buckets are not yet fronted by Cloud CDN (perhaps it's fortunate in a way, because it would almost certainly reduce the current total costs. Vic is not yet sure by how much, but we can explore that). But yes, as it has been set up so far we have no way to get stats on individual file downloads; we only have aggregate download stats. |
AIUI, setting up CDN would involve changing the URL, and if we could do that, the URL could be redirected anywhere, such as chartcenter.io. https://cloud.google.com/cdn/docs/setting-up-cdn-with-bucket |
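For reference, the setup in that doc roughly amounts to putting an HTTP(S) load balancer in front of the bucket. A hedged sketch with made-up resource names (a target proxy and forwarding rule would also be needed, and as noted above the serving URL changes):

```shell
# Create a CDN-enabled backend that serves the existing bucket.
gcloud compute backend-buckets create charts-backend \
  --gcs-bucket-name=kubernetes-charts \
  --enable-cdn

# Route all requests on a new URL map to that backend bucket.
gcloud compute url-maps create charts-lb \
  --default-backend-bucket=charts-backend
```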
@technosophos I believe it should be possible to grant access to log data. It looks like @hh has these permissions: @hh have you analyzed usage of specific charts? |
At the risk of asking something odd... why does someone unaffiliated with the Helm project have access to this data when we do not have that access and were not notified that a 3rd party was given access? |
I asked @mattfarina back in October of 2018 as we were trying to understand which helm charts were popular. Some of our raw explorations from that time are available here: https://github.com/cncf/apisnoop/blob/2278eab54269024e91f150feb12f643364c57fe3/dev/helm-charts/bigquery-notes.md#sec-1 The intention was to run these charts within an audit-logged cluster + APISnoop and focus on writing tests for untested APIs hit by popular k8s applications. Vic Iglesias was the person who granted me (and a couple other ii.coop folks) access.
I think he also created a custom role for:
Unfortunately the log configuration at the time didn't have enough information for us to gain any insight. At the time the rollups didn't quite add up; we were running queries like this:

```sql
select distinct protopayload_auditlog.resourceName,
  count(protopayload_auditlog.resourceName) as count
from kubernetes_charts_data_access.cloudaudit_googleapis_com_data_access_20181023
where protopayload_auditlog.resourceName LIKE "%.tgz"
group by protopayload_auditlog.resourceName
order by count DESC
```

I was able to get useful data out of my own buckets in this way, but for some reason the logging on the helm charts produced very different results.

```sql
SELECT cs_uri, count(cs_uri) AS count
FROM storageanalysis.usage
WHERE cs_uri like "%tgz"
GROUP BY cs_uri
ORDER BY count DESC
```

We tried to remedy this with a new sink:
Hopefully this provides some clarity and insight into what we were up to ~2 years ago. |
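The intent of the queries above -- counting downloads per .tgz package -- can be illustrated with a small, self-contained sketch over hypothetical access-log URIs:

```python
from collections import Counter

# Hypothetical sample of the cs_uri field from GCS usage logs;
# in practice these rows would come from the exported log tables.
sample_uris = [
    "/kubernetes-charts/nginx-1.0.0.tgz",
    "/kubernetes-charts/nginx-1.0.0.tgz",
    "/kubernetes-charts/redis-2.1.3.tgz",
    "/kubernetes-charts/index.yaml",
]

# Equivalent of:
#   SELECT cs_uri, count(cs_uri) AS count ... WHERE cs_uri LIKE "%tgz"
#   GROUP BY cs_uri ORDER BY count DESC
tgz_counts = Counter(u for u in sample_uris if u.endswith(".tgz"))

for uri, count in tgz_counts.most_common():
    print(uri, count)
```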
Now that it's out... GitHub will be hosting the stable and incubator repos. This doesn't impact the deprecation timeline for the charts repo and updates to the charts in it. Details on where to access them in the new location will come out soon, once everything is set up and tested. This will coincide with the release of Helm 2.17 and 3.4. |
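For users, the switch to the GitHub-hosted repos boiled down to re-pointing their clients; the charts.helm.sh URLs below are the locations the Helm project announced for the new hosting:

```shell
# Point Helm at the new GitHub-hosted chart repositories.
helm repo add stable https://charts.helm.sh/stable
helm repo add incubator https://charts.helm.sh/incubator
helm repo update
```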
https://helm.sh/blog/charts-repo-deprecation/
https://helm.sh/blog/helm-2-becomes-unsupported/

I think we can close out this ticket now that the deprecation deadline has passed. Thanks everyone. |
Google Cloud has been hosting Helm releases and charts, which reside in GCS storage buckets.
Much like Kubernetes project infrastructure (https://github.com/kubernetes/community/tree/master/wg-k8s-infra), these resources should be transferred to community/CNCF ownership and management.