Merge pull request github#35219 from github/repo-sync
Repo sync
docs-bot authored Nov 7, 2024
2 parents aec2b02 + 1a99ce6 commit 1b5e3de
Showing 122 changed files with 4,477 additions and 2,463 deletions.
23 changes: 23 additions & 0 deletions .env.example
@@ -0,0 +1,23 @@
# This file is a template for what your untracked .env file might look like for local development.
# Please copy this to a new .env file and fill in the values as needed.

# Requires a running local Elasticsearch service. Can be started via Docker, see https://github.com/github/docs-engineering/blob/main/docs/elasticsearch/elasticsearch-locally.md
# When this value is unset searches will be proxied to the production Elasticsearch endpoint
ELASTICSEARCH_URL=http://localhost:9200

# Set for sending events in local development. See https://github.com/github/docs-engineering/blob/main/docs/analytics/hydro-mock.md
HYDRO_ENDPOINT=
HYDRO_SECRET=

# Localization variables
# See https://github.com/github/docs-internal/tree/main/src/languages#working-with-translated-content-locally
ENABLED_LANGUAGES=
TRANSLATIONS_ROOT=

# For running the src/search/scripts/scrape script
# You may want a lower value depending on your CPU
BUILD_RECORDS_MAX_CONCURRENT=100
BUILD_RECORDS_MIN_TIME=

# Set to true to enable the /fastly-cache-test route for debugging Fastly headers
ENABLE_FASTLY_TESTING=
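To use this template, copy it to an untracked `.env` and start the local Elasticsearch it points at. A minimal sketch, assuming Docker and a stock Elasticsearch 8.x image — the image tag and security flags here are illustrative, not taken from the linked docs-engineering guide:

```bash
# Copy the template into the untracked .env that local development reads.
cp .env.example .env

# Start a throwaway single-node Elasticsearch (illustrative flags; the
# internal guide linked above may use a different setup).
docker run -d --name docs-elasticsearch \
  -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  docker.elastic.co/elasticsearch/elasticsearch:8.15.0

# Verify the service answers before pointing ELASTICSEARCH_URL at it.
curl -I http://localhost:9200
```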
.github/workflows/index-autocomplete-search.yml
@@ -1,7 +1,7 @@
-name: Index autocomplete Elasticsearch
+name: Index autocomplete search in Elasticsearch

-# **What it does**: Indexes autocomplete data into Elasticsearch.
-# **Why we have it**: So we can power the API for autocomplete.
+# **What it does**: Indexes autocomplete data (general and AI search) into Elasticsearch.
+# **Why we have it**: So we can power the APIs for autocomplete.
# **Who does it impact**: docs-engineering

on:
@@ -10,7 +10,7 @@ on:
    - cron: '20 16 * * *' # Run every day at 16:20 UTC / 8:20 PST
  pull_request:
    paths:
-      - .github/workflows/index-autocomplete-elasticsearch.yml
+      - .github/workflows/index-autocomplete-search.yml
      - 'src/search/scripts/index/**'
      - 'package*.json'
@@ -40,10 +40,15 @@ jobs:
        if: ${{ github.event_name == 'pull_request' }}
        run: curl --fail --retry-connrefused --retry 5 -I http://localhost:9200

-      - name: Run indexing
+      - name: Run general auto-complete indexing
        env:
          ELASTICSEARCH_URL: ${{ github.event_name == 'pull_request' && 'http://localhost:9200' || secrets.ELASTICSEARCH_URL }}
-        run: npm run index -- autocomplete docs-internal-data
+        run: npm run index-general-autocomplete -- docs-internal-data
+
+      - name: Run AI search auto-complete indexing
+        env:
+          ELASTICSEARCH_URL: ${{ github.event_name == 'pull_request' && 'http://localhost:9200' || secrets.ELASTICSEARCH_URL }}
+        run: npm run index-ai-search-autocomplete -- docs-internal-data

      - uses: ./.github/actions/slack-alert
        if: ${{ failure() && github.event_name == 'schedule' }}
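The two indexing steps above can be reproduced outside CI. A sketch, assuming a checkout of the `docs-internal-data` repository in the working directory and that the npm scripts behave the same locally as in the workflow (only the script names and the health check come from the diff; the rest is illustrative):

```bash
# Point the scripts at a local Elasticsearch, as the pull_request branch does.
export ELASTICSEARCH_URL=http://localhost:9200

# Same health check the workflow runs before indexing.
curl --fail --retry-connrefused --retry 5 -I "$ELASTICSEARCH_URL"

# Index general autocomplete, then AI search autocomplete.
npm run index-general-autocomplete -- docs-internal-data
npm run index-ai-search-autocomplete -- docs-internal-data
```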
.github/workflows/index-general-search-pr.yml
@@ -1,6 +1,6 @@
-name: Sync search - PR
+name: Index general search in Elasticsearch on PR

-# **What it does**: This does what `sync-sarch-elasticsearch.yml` does but
+# **What it does**: This does what `index-general-search-elasticsearch.yml` does but
# with a localhost Elasticsearch and only for English.
# **Why we have it**: To test that the script works and the popular pages json is valid.
# **Who does it impact**: Docs engineering
@@ -11,8 +11,8 @@ on:
    paths:
      - 'src/search/**'
      - 'package*.json'
-      # Ultimately, for debugging this workflow itself
-      - .github/workflows/sync-search-pr.yml
+      # For debugging this workflow
+      - .github/workflows/index-general-search-pr.yml
      # Make sure we run this if the composite action changes
      - .github/actions/setup-elasticsearch/action.yml

@@ -25,9 +25,6 @@ concurrency:
  cancel-in-progress: true

env:
-  # Yes, it's hardcoded but it makes all the steps look exactly the same
-  # as they do in `sync-search-elasticsearch.yml` where it uses
-  # that `${{ env.ELASTICSEARCH_URL }}`
  ELASTICSEARCH_URL: http://localhost:9200
  # Since we'll run in NODE_ENV=production, we need to be explicit that
  # we don't want Hydro configured.
@@ -63,7 +60,7 @@ jobs:
        env:
          ENABLE_DEV_LOGGING: false
        run: |
-          npm run sync-search-server > /tmp/stdout.log 2> /tmp/stderr.log &
+          npm run general-search-scrape-server > /tmp/stdout.log 2> /tmp/stderr.log &

          # first sleep to give it a chance to start
          sleep 6
@@ -88,15 +85,13 @@ jobs:
          # let's just accept an empty string instead.
          THROW_ON_EMPTY: false

-          # The sync-search-index recognizes this env var if you don't
-          # use the `--docs-internal-data <PATH>` option.
          DOCS_INTERNAL_DATA: docs-internal-data

        run: |
          mkdir /tmp/records
-          npm run sync-search-indices -- /tmp/records \
+          npm run general-search-scrape -- /tmp/records \
            --language en \
-            --version dotcom
+            --version fpt

          ls -lh /tmp/records
@@ -106,9 +101,9 @@ jobs:
      - name: Index into Elasticsearch
        run: |
-          npm run index-elasticsearch -- /tmp/records \
+          npm run index-general-search -- /tmp/records \
            --language en \
-            --version dotcom
+            --version fpt

      - name: Check created indexes and aliases
        run: |
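Taken together, this workflow is a scrape-then-index pipeline. A condensed local sketch of the same flow, assuming a production build is already serving and using only the commands visible in the diff (the `_cat` checks are standard Elasticsearch APIs, not taken from the workflow):

```bash
# Start the scrape server in the background, logging to /tmp.
npm run general-search-scrape-server > /tmp/stdout.log 2> /tmp/stderr.log &
sleep 6  # give it a chance to start

# Scrape English free-pro-team records into a scratch directory.
mkdir -p /tmp/records
npm run general-search-scrape -- /tmp/records --language en --version fpt

# Index the scraped records into the local Elasticsearch.
npm run index-general-search -- /tmp/records --language en --version fpt

# Inspect the resulting indexes and aliases.
curl 'http://localhost:9200/_cat/indices?v'
curl 'http://localhost:9200/_cat/aliases?v'
```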
.github/workflows/index-general-search-elasticsearch.yml
@@ -1,4 +1,4 @@
-name: Sync search Elasticsearch
+name: Index general search in Elasticsearch

# **What it does**: It scrapes the whole site and dumps the records in a
# temp directory. Then it indexes that into Elasticsearch.
@@ -140,7 +140,7 @@ jobs:
        env:
          ENABLE_DEV_LOGGING: false
        run: |
-          npm run sync-search-server > /tmp/stdout.log 2> /tmp/stderr.log &
+          npm run general-search-scrape-server > /tmp/stdout.log 2> /tmp/stderr.log &

          # first sleep to give it a chance to start
          sleep 6
@@ -169,13 +169,11 @@ jobs:
          # the same as not set within the script.
          VERSION: ${{ inputs.version }}

-          # The sync-search-index recognizes this env var if you don't
-          # use the `--docs-internal-data <PATH>` option.
          DOCS_INTERNAL_DATA: docs-internal-data

        run: |
          mkdir /tmp/records
-          npm run sync-search-indices -- /tmp/records \
+          npm run general-search-scrape -- /tmp/records \
            --language ${{ matrix.language }}

          ls -lh /tmp/records
@@ -186,12 +184,12 @@ jobs:
      - name: Index into Elasticsearch
        env:
-          # Must match what we used when scraping (npm run sync-search-indices)
+          # Must match what we used when scraping (npm run general-search-scrape)
          # otherwise the script will seek other versions from disk that might
          # not exist.
          VERSION: ${{ inputs.version }}
        run: |
-          npm run index-elasticsearch -- /tmp/records \
+          npm run index-general-search -- /tmp/records \
            --language ${{ matrix.language }} \
            --stagger-seconds 5 \
            --retries 5
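Unlike the PR variant, this workflow fans out over a language matrix, which is presumably why the indexing step takes `--stagger-seconds` and `--retries`. A hedged example of what one matrix cell's invocation would look like, with an illustrative language code:

```bash
# Index one language's records, spacing writes out and retrying
# transient failures, as the workflow's flags above suggest.
npm run index-general-search -- /tmp/records \
  --language ja \
  --stagger-seconds 5 \
  --retries 5
```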
6 changes: 6 additions & 0 deletions .gitignore
@@ -51,3 +51,9 @@ assets/images/help/writing/unordered-list-rendered (1).png

# Used by precompute-pageinfo
.pageinfo-cache.json.br
+
+# Cloned and used for indexing Elasticsearch data
+docs-internal-data/
+
+# For intermediate data (like scraping for Elasticsearch indexing)
+tmp/
@@ -212,3 +212,30 @@ If your appliance averages more than 70% CPU utilization, {% data variables.prod
As part of upgrading GitHub Enterprise Server to version 3.13 or later, the Elasticsearch service will be upgraded. {% data variables.product.company_short %} strongly recommends following the guidance in "[AUTOTITLE](/admin/upgrading-your-instance/performing-an-upgrade/preparing-for-the-elasticsearch-upgrade)."
{% endif %}
{% ifversion ghes > 3.12 and ghes < 3.15 %}

## Undecryptable records

If you are upgrading from {% data variables.product.prodname_ghe_server %} 3.11 or 3.12 to 3.13, or from 3.12 to 3.14, you may run into an issue with undecryptable records due to missing required keys for decryption. The only solution is to delete the undecryptable records. The records impacted by this issue are 2FA records, which means you might need to ask users to re-enable two-factor authentication (2FA).

### Before upgrading

If you are upgrading from {% data variables.product.prodname_ghe_server %} 3.11 or 3.12 to 3.13, or from 3.12 to 3.14, you can run the encryption diagnostics script to identify the undecryptable records ahead of time. This gives you the opportunity to understand the impact and plan for it.
1. Download the [encryption diagnostics script](https://gh.io/ghes-encryption-diagnostics). You can use a command like `curl -L -O https://gh.io/ghes-encryption-diagnostics` to download the script.
1. Save the script to the `/data/user/common` directory on the appliance.
1. Follow the instructions at the top of the script and execute it on the appliance. If there are any undecryptable records, they are logged in `/tmp/column_encryption_records_to_be_deleted.log`. Any records logged here mean that the system could not find the keys for them and therefore could not decrypt the data in those records.

   Note that these records will be deleted as part of the process. The script will warn you about the users who will need to re-enroll in 2FA after the upgrade; the impacted users' handles are logged in `/tmp/column_encryption_users_to_have_2fa_disabled.log`.

If the script runs into unexpected issues, you will be prompted to [contact {% data variables.contact.github_support %}](/support/contacting-github-support). Errors related to these issues will be logged in `/tmp/column_encryption_unexpected_errors.log`. If you are in a dire situation and are unable to have users re-enroll into 2FA, [contact {% data variables.contact.github_support %}](/support/contacting-github-support) for help.
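The steps above translate to a short shell session on the appliance. A sketch, assuming administrative SSH access — only the `gh.io` URL, the target directory, and the log paths come from the text; the saved filename is whatever `curl -O` derives from the URL:

```bash
# On the appliance, fetch the diagnostics script into /data/user/common.
cd /data/user/common
curl -L -O https://gh.io/ghes-encryption-diagnostics

# Execute it per the instructions at the top of the script.
bash ghes-encryption-diagnostics

# Review what would be deleted and who must re-enroll in 2FA.
cat /tmp/column_encryption_records_to_be_deleted.log
cat /tmp/column_encryption_users_to_have_2fa_disabled.log
```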

### During the upgrade

In case you did not have the opportunity to run the encryption diagnostics script ahead of time, there are mechanisms in the product to help you. The pre-flight checks during the upgrade process will detect undecryptable records and log them in `/tmp/column_encryption_records_to_be_deleted.log`. The sequence will warn you of the users who will need to re-enable 2FA after the upgrade; the impacted users' records are logged in `/tmp/column_encryption_users_to_have_2fa_disabled.log`.

If undecryptable records are detected, you will be asked whether you want to proceed with the upgrade. If you proceed, the upgrade process deletes the undecryptable records. Otherwise, the upgrade process will exit.

If you have any questions during the upgrade, you can reach out to {% data variables.contact.github_support %}. Once you have had the time and opportunity to understand the impact, you can retrigger the upgrade.
{% endif %}
@@ -80,6 +80,12 @@ For example, you link your Azure subscription to your organization {% ifversion

* You must know your Azure subscription ID. See [Get subscription and tenant IDs in the Azure portal](https://learn.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id) in the Microsoft Docs or [contact Azure support](https://azure.microsoft.com/support/).

+## Video demonstration of connecting a subscription
+
+To connect an Azure subscription, you'll need appropriate access permissions on both {% data variables.product.product_name %} and the Azure billing portal. This may require coordination between two different people.
+
+To see a demo of the process from beginning to end, see [Billing GitHub consumption through an Azure subscription](https://www.youtube.com/watch?v=Y-f7JKJ4_8Y) on {% data variables.product.company_short %}'s YouTube channel. This video demonstrates the process for an enterprise account. If you're connecting a subscription to an organization account, see "[Connecting your Azure subscription to your organization account](/free-pro-team@latest/billing/managing-the-plan-for-your-github-account/connecting-an-azure-subscription#connecting-your-azure-subscription-to-your-organization-account)."

{% ifversion fpt %}

## Connecting your Azure subscription to your organization account
@@ -35,7 +35,7 @@ Generate end-user query help from .qhelp files.

### Primary Options

-#### `<qhelp|mdhelp|query|dir|suite>...`
+#### `<qhelpquerysuite>...`

\[Mandatory] Query help files to render. Each argument is one of:

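For context, this section documents the positional arguments to `codeql generate query-help`. A minimal sketch of an invocation — the `--format` and `--output` flags are assumed from the CLI's documented options, and the paths are illustrative:

```bash
# Render end-user help for one query's .qhelp file as Markdown.
codeql generate query-help \
  --format=markdown \
  --output=./query-help \
  ql/src/Security/CWE-079/StoredXss.qhelp
```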
@@ -3,7 +3,7 @@ title: Transcript - "Billing GitHub consumption through an Azure subscription"
intro: Audio and visual transcript.
shortTitle: Billing through Azure
allowTitleToDifferFromFilename: true
-product_video: 'https://www.youtube.com/watch?v=DAiIhJKCt8s'
+product_video: 'https://www.youtube.com/watch?v=Y-f7JKJ4_8Y'
topics:
- Transcripts
versions:
@@ -27,7 +27,9 @@ And finally, if a Microsoft customer has an Azure discount, it will automaticall

If a Microsoft customer also has a Microsoft Azure Consumption Commitment, or MACC, all future GitHub consumption will decrement their MACC as well.

-So what GitHub products are eligible for Azure billing? Any GitHub consumption products are eligible today, meaning products that customers pay for based on actual usage, including Copilot for Business, GitHub-hosted actions, larger hosted runners, GitHub Packages and storage, and GitHub Codespaces. Please note that GitHub Enterprise and GitHub Advanced Security are currently not able to be billed through Azure, but are instead invoiced on an annual basis.
+So what GitHub products are eligible for Azure billing? Any GitHub consumption products are eligible today, meaning products that customers pay for based on actual usage, including things like GitHub Copilot, GitHub-hosted actions, larger hosted runners, GitHub Packages and storage, and GitHub Codespaces.
+
+Historically, GitHub Enterprise and Advanced Security were only available through an annual license. However, as of August 1, 2024, they are now also available for metered billing through Azure, for additional flexibility and pay-as-you-go pricing. For existing licensed customers, be sure to connect with your GitHub seller to learn more, as certain restrictions may apply.

[A table shows eligibility for Azure billing and MACCs for the products mentioned. In the table, all products eligible for Azure billing are also eligible for MACCs.]

2 changes: 2 additions & 0 deletions data/release-notes/enterprise-server/3-10/17.yml
@@ -5,6 +5,8 @@ sections:
**MEDIUM:** An attacker could steal sensitive information by exploiting a Cross-Site Scripting vulnerability in the repository transfer feature. This exploitation would require social engineering. GitHub has requested CVE ID [CVE-2024-8770](https://www.cve.org/cverecord?id=CVE-2024-8770) for this vulnerability, which was reported via the [GitHub Bug Bounty program](https://bounty.github.com/).
- |
**MEDIUM:** An attacker could push a commit with changes to a workflow using a PAT or OAuth app that lacks the appropriate `workflow` scope by pushing a triple-nested tag pointing at the associated commit. GitHub has requested CVE ID [CVE-2024-8263](https://www.cve.org/cverecord?id=CVE-2024-8263) for this vulnerability, which was reported via the [GitHub Bug Bounty program](https://bounty.github.com/).
+    - |
+      **HIGH:** A GitHub App installed in organizations could upgrade some permissions from read to write access without approval from an organization administrator. An attacker would require an account with administrator access to install a malicious GitHub App. GitHub has requested [CVE ID CVE-2024-8810](https://www.cve.org/cverecord?id=CVE-2024-8810) for this vulnerability, which was reported via the [GitHub Bug Bounty Program](https://bounty.github.com/). [Updated: 2024-11-07]
bugs:
- |
For instances deployed on AWS with IMDSv2 enforced, fallback to private IPs was not successful.
34 changes: 34 additions & 0 deletions data/release-notes/enterprise-server/3-10/19.yml
@@ -0,0 +1,34 @@
date: '2024-11-07'
sections:
security_fixes:
- |
**HIGH**: An attacker could bypass SAML single sign-on (SSO) authentication with the optional encrypted assertions feature, allowing unauthorized provisioning of users and access to the instance, by exploiting an improper verification of cryptographic signatures vulnerability in GitHub Enterprise Server. This is a follow-up fix for [CVE-2024-9487](https://www.cve.org/cverecord?id=CVE-2024-9487) to further harden the encrypted assertions feature against this type of attack. Please note that encrypted assertions are not enabled by default. Instances not utilizing SAML SSO, or utilizing SAML SSO authentication without encrypted assertions, are not impacted. Additionally, an attacker would require direct network access as well as a signed SAML response or metadata document to exploit this vulnerability.
known_issues:
- |
Custom firewall rules are removed during the upgrade process.
- |
During the validation phase of a configuration run, a `No such object` error may occur for the Notebook and Viewscreen services. This error can be ignored as the services should still correctly start.
- |
If the root site administrator is locked out of the Management Console after failed login attempts, the account does not unlock automatically after the defined lockout time. Someone with administrative SSH access to the instance must unlock the account using the administrative shell. For more information, see "[AUTOTITLE](/admin/configuration/administering-your-instance-from-the-management-console/troubleshooting-access-to-the-management-console#unlocking-the-root-site-administrator-account)."
- |
The `mbind: Operation not permitted` error in the `/var/log/mysql/mysql.err` file can be ignored. MySQL 8 does not gracefully handle when the `CAP_SYS_NICE` capability isn't required, and outputs an error instead of a warning.
- |
{% data reusables.release-notes.2023-11-aws-system-time %}
- |
On an instance with the HTTP `X-Forwarded-For` header configured for use behind a load balancer, all client IP addresses in the instance's audit log erroneously appear as 127.0.0.1.
- |
{% data reusables.release-notes.2023-10-git-push-made-but-not-registered %}
- |
{% data reusables.release-notes.large-adoc-files-issue %}
- |
{% data reusables.release-notes.2024-01-haproxy-upgrade-causing-increased-errors %}
- |
The `reply.[HOSTNAME]` subdomain is falsely always displaying as having no SSL and DNS record, when testing the domain settings via the Management Console without subdomain isolation.
- |
Admin stats REST API endpoints may time out on appliances with many users or repositories. Retrying the request until data is returned is advised.
- |
{% data reusables.release-notes.2024-06-possible-frontend-5-minute-outage-during-hotpatch-upgrade %}
- |
When restoring from a backup snapshot, a large number of `mapper_parsing_exception` errors may be displayed.
- |
Services may respond with a `503` status due to an out-of-date `haproxy` configuration. This can usually be resolved with a `ghe-config-apply` run.
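Several of these known issues point at the administrative shell as the remedy. A sketch of the `503`/`haproxy` workaround from the last item, assuming administrative SSH access (the hostname is a placeholder):

```bash
# Open the administrative shell (GHES accepts admin SSH on port 122).
ssh -p 122 admin@HOSTNAME

# Then, from inside the administrative shell, re-run the configuration
# to refresh the out-of-date haproxy config:
ghe-config-apply
```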
