Havana #1

Open
wants to merge 3,522 commits into master
Conversation

herricklai

When a user authenticates against the Identity V3 API, they can specify
multiple authentication methods. This patch removes duplicates, which
could otherwise have been exploited to mount DoS attacks.

Closes-Bug: 1300274
(cherry picked from commit ef868ad)
Cherry-pick from https://review.openstack.org/#/c/84425/

Change-Id: I6e60324309baa094a5e54b012fb0fc528fea72ab
(cherry picked from commit e364ba5)
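
As a rough illustration of the fix (not Keystone's actual code), deduplicating the requested auth methods while preserving order is enough to stop a request from repeating the same method thousands of times:

    def deduplicate_methods(methods):
        """Return the requested auth methods with duplicates removed."""
        seen = set()
        unique = []
        for method in methods:
            if method not in seen:
                seen.add(method)
                unique.append(method)
        return unique

    # a request repeating "password" thousands of times would otherwise run
    # the same (expensive) auth plugin once per entry
    assert deduplicate_methods(["password", "password", "token"]) == \
           ["password", "token"]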

wu-wenxiang and others added 30 commits August 22, 2013 08:53
Modify tests/test_v3_auth.py to remove the useless arg ("start index" = 0),
since its default value is 0. range(0, N) was not called anywhere else in
the keystone sources.

Change-Id: Ifce3384982476a7b9884a51aa73fdb45798e3051
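
For reference, the two spellings are equivalent in Python, which is why the explicit start index adds nothing:

    # range(0, N) and range(N) are equivalent; the start argument defaults to 0
    assert list(range(0, 5)) == list(range(5)) == [0, 1, 2, 3, 4]
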
Change-Id: Id62bd77950681db0303f6b3bc1a630aa59dd40c1
Fixes: Bug #1214135
When the token SQL backend was going through tokens to delete, it
failed if the token ref didn't have a token field. This adds a guard
for that case.

fixes bug: #1215493

Change-Id: Ia12f5e4d9af71c322c9230464ae39ec88303b600
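
A minimal sketch of the guard described above; the token_ref layout and the helper name are illustrative, not the backend's actual code:

    def expired_token_ids(token_refs, now):
        """Yield IDs of expired tokens, skipping refs without a token field."""
        for ref in token_refs:
            token = ref.get("token")
            if not token:          # guard: malformed ref, nothing to inspect
                continue
            if token.get("expires") and token["expires"] < now:
                yield ref["id"]
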
Merging ec2 credentials into the credentials
table to simplify management of ec2
credentials.

blueprint migrate-ec2-credentials

Change-Id: I8f83c007a44857ca41d7ef23f70cb9718d83ca5d
Messages created statically (during import) were not being
translated in responses when the Accept-Language header was
used to set the expected language in the response. The static
messages were being created before the _ built-in had been
installed by gettextutils.install().

Change-Id: Ie56b1d3a836bc5f2262d7af68f803a08ebdf016f
Resolves-Bug: #1215192
Remove en_US as the default language when no header is provided, and use
None instead. Upon translation, None defaults to the system locale, as it
did before the translation changes.

Fixes bug: #1214476

Change-Id: Ice7f55912a7bd6c727a7f8a2a1172871fe27a3dc
Change-Id: I158e916f1f422882b83f648d7720f8f87a4c5813
Fixes: Bug #1215482
Several warnings were generated from keystone-manage when
building docs. Also, the options without a description
didn't display correctly in the rendered result.

Also, I added a description for the new db_version subcommand.

To generate the new option list, I ran keystone-manage --help
and copy-pasted the options into the doc.

Change-Id: I1a405ca03d894c9c3e0f6b3bfccc9bcfcce1302d
When using Keystone against an Active Directory server, assigned
roles weren't found for users.

When roles are added as DNs in the roleOccupant attribute, an LDAP
server can normalize the value so that when the entry is read later
the roleOccupant isn't exactly the same as it was when added.
Keystone should compare users by ID rather than by DN.
(Note that this is how the comparison is done in Grizzly.)

Keystone's fake LDAP is changed to muck with roleOccupant and
member DNs by uppercasing attribute names (like Active Directory).
The code is fixed to compare users by ID rather than DN.

Change-Id: Iaa41c3ef9febcabef0662f38b13d319a5b5583bc
Resolves-Bug: #1210675
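
A rough sketch of the idea: derive the user ID from the DN's first RDN and compare IDs, rather than comparing raw DN strings that the server may have normalized. The 'cn' ID attribute and the string parsing here are assumptions for illustration only:

    def id_from_dn(dn, id_attr="cn"):
        """Extract the ID value from the first RDN of a DN."""
        first_rdn = dn.split(",")[0]
        attr, _, value = first_rdn.partition("=")
        if attr.strip().lower() != id_attr.lower():
            raise ValueError("unexpected RDN attribute: %s" % attr)
        return value.strip()

    # The server may rewrite attribute names (e.g. CN= vs cn=), so comparing
    # raw DN strings fails while the extracted IDs still match.
    assert id_from_dn("CN=alice,ou=Users,dc=example,dc=com") == \
           id_from_dn("cn=alice,ou=users,dc=example,dc=com")
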
flake8 will read the [flake8] section of tox.ini to guide its
behavior. In run_tests.sh, the function run_flake8() does not
need the flake8 option any more.

Fixes Bug #1214159
Change-Id: I1bbba279a339e35c776a6969598b398f57fd7646
There was a cut-n-paste bug where self._set_permissions() was called
with the exact same filename, self.ssl_config_file_name, instead of
the index and serial filenames. This patch uses the index and serial
filenames as was the original intent.

Change-Id: I571c766ac746bbbc1bedfdf1ff2b1b86363a0af0
Fixes: bug #1206254
Change-Id: I4872cd24511df995b023a47edad3176a54a49504
OS-EP-FILTER Implementation

There are new methods to create endpoint and project associations:
a full CRUD API to assign projects to endpoints, as well as
the ability to check all the projects associated with a given
endpoint.

The association is used to pick what endpoints are visible
for the given project and a filtered catalog is built
accordingly.

During a project-scoped token request, if project-endpoint
associations have been created, the returned catalog will only
list the project linked endpoints.

blueprint endpoint-filtering

Change-Id: Idaa7f448a67e3bae01ba12686be37ba058183cf6
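
A minimal sketch (not the OS-EP-FILTER driver API) of how a project-scoped catalog could be narrowed to the endpoints associated with a project:

    def filter_catalog(catalog, associations, project_id):
        """Return only the endpoints linked to project_id.

        catalog:      list of endpoint dicts with an 'id' key
        associations: iterable of (endpoint_id, project_id) pairs
        """
        allowed = {ep for ep, proj in associations if proj == project_id}
        if not allowed:            # no associations created: full catalog
            return catalog
        return [ep for ep in catalog if ep["id"] in allowed]
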
Implements core elements of Keystone Caching layer. dogpile.cache
is used as the caching library to provide flexibility to the cache
backend (out of the box, Redis, Memcached, File, and in-Memory).

The keystone.common.cache.on_arguments decorator is used to cache the
return value of methods and functions that are meant to be cached.

The default behavior is to not cache anything (using a no-op cacher).

developing.rst has been updated to give an outline on how to approach
adding the caching layer onto a manager object.

Subsequent patches will build upon this code to enable caching across
the various keystone subsystems.

DocImpact

partial-blueprint: caching-layer-for-driver-calls
Change-Id: I664ddccd8d1d393cf50d24054f946512093fb790
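
A standalone dogpile.cache example in the spirit of the decorator described above; Keystone wraps this in keystone.common.cache, which is not shown here:

    from dogpile.cache import make_region

    # Configure an in-memory backend for the demo; the default "cache
    # nothing" behaviour would use the "dogpile.cache.null" backend instead.
    region = make_region().configure("dogpile.cache.memory")

    @region.cache_on_arguments()
    def get_project(project_id):
        print("cache miss for %s" % project_id)
        return {"id": project_id}

    get_project("p1")   # prints "cache miss for p1" and caches the result
    get_project("p1")   # served from the cache, no print
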
Based upon the Keystone caching (using dogpile.cache) implementation
token revocation list caching is implemented in this patchset.

The following methods are cached:
    * token_api.list_revoked_tokens

Calls to token_api.delete_token and token_api.delete_tokens will
properly invalidate the cache for the revocation list.

Reworked some of the caching tests to allow for more in-depth
tests of the cache layer.

DocImpact

partial-blueprint: caching-layer-for-driver-calls
Change-Id: I2bc821fa68035884dfb885b17c051f3023e7a9f6
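
The invalidation pattern, illustrated with plain dogpile.cache; the method names mirror the commit message, but the bodies are placeholders:

    from dogpile.cache import make_region

    region = make_region().configure("dogpile.cache.memory")

    @region.cache_on_arguments()
    def list_revoked_tokens():
        # placeholder for the real driver call
        return fetch_revocations_from_backend()

    def fetch_revocations_from_backend():
        return []

    def delete_token(token_id):
        # ... remove the token from the backend (not shown) ...
        # then drop the cached revocation list so the next call re-reads it
        list_revoked_tokens.invalidate()
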
The Token Provider was not aware of expired tokens, and would
not raise Unauthorized if a token was expired when asked
to "validate" (validate_token, validate_v2_token, validate_v3_token)
or "check" (check_v2_token, check_v3_token) it. The assumption was
that the Token driver would never return an expired token  from
the token_api.get_token method.  When token caching is implemented,
this assumption can no longer be made.

This patchset updates the provider to make it capable of inspecting
token data returned by the token validation methods on the driver.
If a token is expired it will properly raise Unauthorized.

Refactored the Token Provider to no longer use a separate code path
to "validate" or "check" tokens. Check now benefits from the code
that ensures a token is still valid.

Since caching is implemented at the manager level, the expiration
check is done in the manager (above the driver); the manager needs
to be expiration aware, because responses from the driver may be
cached.

partial-blueprint: caching-layer-for-driver-calls
Change-Id: I2caa4cb47ba1d3a33746fc00f672a5b8fe319bd6
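
A simplified sketch of an expiration check done above the driver; the exception class and the token_data layout are placeholders, not Keystone's actual types:

    import datetime

    class Unauthorized(Exception):      # stand-in for keystone's exception
        pass

    def assert_not_expired(token_data):
        """Raise Unauthorized if the (possibly cached) token has expired."""
        expires = token_data.get("expires")     # assumed to be a datetime
        if expires is None or expires < datetime.datetime.utcnow():
            raise Unauthorized("token is expired")
        return token_data
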
API policy protection is currently limited to using the parameters
passed into the call. However, there are many cases where you want
to also check attributes of the entities an API is operating upon.  The
classic example is ensuring a domain administrator cannot get, update or
delete users, groups or projects outside of their domain.

This patch enables lines in the policy file to also refer to any field
in the target object of the API call. In addition, it includes a separate
sample policy file that shows how to use domains and the new protection
ability to provide domain segregation and administration delegation.
This sample file is also tested to ensure that such protection works
correctly.

DocImpact

Implements bp policy-on-api-target

Change-Id: Ie1a4e14a86d27e8b60e6c17e33dd6b9fa889660c
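
As a rough illustration (not the shipped sample file), policy rules can reference fields of the target object via target.<entity>.<field>; the rule strings below are modeled on that capability and are assumptions, not the exact sample policy:

    # Illustrative rules only; field access via target.<entity>.<field>.
    POLICY = {
        # a domain admin may only delete users in their own domain
        "identity:delete_user":
            "role:admin and domain_id:%(target.user.domain_id)s",
        # and may only update projects in their own domain
        "identity:update_project":
            "role:admin and domain_id:%(target.project.domain_id)s",
    }
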
Mark T. Voelker and others added 30 commits January 9, 2014 13:45
netifaces is not required to run the tests, so remove it from
the requirements.

Related-Bug: #1266513
Change-Id: Ifb3b262f47d629670b06c670353dbe798af4dc03
Handles common UTF8 encoding and decoding situations.

Related-Bug: 1253905
Change-Id: Ia5e743afd59f33bfc7006ff98c39e32e63733803
Database errors from SQLAlchemy tend to get mangled into ASCII. If an
error containing UTF-8 data is processed as ASCII, building the message
fails and leads to a 5xx error. Catch the error and try to decode it to
build the message again; if that doesn't work, at least fail gracefully.

Closes-Bug: 1253905
Change-Id: Iecb1170387c51064918780dc6de07db7ca8aeeee
(cherry picked from commit dcefe58)
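
A sketch of the "decode, then fail gracefully" idea; the real change lives in Keystone's exception handling, and the helper name here is invented:

    def safe_decode(raw):
        """Best-effort UTF-8 decode of an error payload from the DB layer."""
        if isinstance(raw, bytes):
            try:
                return raw.decode("utf-8")
            except UnicodeDecodeError:
                # degrade to replacement characters rather than a 5xx error
                return raw.decode("utf-8", "replace")
        return raw

    safe_decode("café".encode("utf-8"))   # 'café'
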
According to the docs, the list responses should not contain
the roles, only the detailed response when you get a trust
explicitly by ID.  So remove the roles and modify the tests
appropriately.

Note it was also observed that expires_at is present in all
GET responses but not in the docs; this has been agreed to be
a docs error and will be addressed via a docs patch.

Change-Id: I5387021a53f3284add9e5e71e9e005c4dd31b76c
Closes-Bug: #1245590
(cherry picked from commit ab0e2c7)
Ibf28ba17f Remove the notifier and its dependencies from log.py

Move the code related to the publish error handler out of the
log module so it's easier for other projects to consume it

Closes-bug: #1240349
(cherry picked from commit 1a961bf)

Conflicts:
	openstack-common.conf

Change-Id: Ib97bc01b60d7ea6c2e6bc3f0229deffbadbf18cc
The mock library is added to test-requirements.txt since the
mockpatch fixture requires it.

Change-Id: I1b5b0c75f256382a685fceb2117db6d5b18d8c4f
(cherry picked from commit 07aa0a9)
This consists of the following 3 patches:

    Narrow columns used in list_revoked_tokens sql

    Currently the SQL backend lists revoked tokens by selecting all of the
    columns, including the massive "extra" column. This places a significant
    burden on the client library and wastes resources. We only need the
    id/expired columns to satisfy the API call.

    In tests this query was several orders of magnitude faster with just two
    thousand un-expired revoked tokens.
    (cherry picked from commit ab72212)

    Add index to cover revoked token list

    The individual expires and valid indexes do not fully cover the most
    common query, which is the one that lists revoked tokens.

    Because valid is only ever used in conjunction with expires, we do not
    need it to have its own index now that there is a covering compound
    index for expires and valid.

    Note that the expires index is still useful alone for purging old tokens
    as we do not filter for valid in that case.
    (cherry picked from commit dd2c80c)

    Remove unused token.valid index

    Because valid is only ever used in conjunction with expires, we do not
    need it to have its own index now that there is a covering compound
    index for expires and valid.

    Note that the expires index is still useful alone for purging old tokens
    as we do not filter for valid in that case.
    (cherry picked from commit 5d8a1a4)

Change-Id: I04d62b98d5d760a3fbc3c8db61530f7ebccb0a48
Closes-Bug: #1253755
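
A sketch with a throwaway SQLAlchemy model showing both ideas: select only the narrow columns and cover the query with a compound index. The column and index names are illustrative, not the real schema:

    from sqlalchemy import Boolean, Column, DateTime, Index, String, Text
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Token(Base):
        __tablename__ = "token"
        id = Column(String(64), primary_key=True)
        expires = Column(DateTime)
        valid = Column(Boolean)
        extra = Column(Text)  # large serialized blob the query should avoid
        # compound index covering the "list revoked tokens" query
        __table_args__ = (Index("ix_token_expires_valid", "expires", "valid"),)

    def list_revoked_token_rows(session):
        # select only the two columns the API needs instead of full rows
        return session.query(Token.id, Token.expires).filter(
            Token.valid.is_(False)).all()
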
Sync the following fixes from oslo-incubator:

e355fa3 Create a shared queue for QPID topic consumers
55678c7 Properly reconnect subscribing clients when QPID broker restarts
76972e2 Support a new qpid topology
7b0cb37 Don't eat callback exceptions

It also pulls in an unrelated fix for impl_kombu, to bring it in sync
with oslo-incubator stable/havana:
69abf38 requeue instead of reject

Closes-bug: #1251757
Closes-bug: #1257293
Closes-bug: #1178375

Change-Id: I45257c62168163d2d4ceda994c94ff2d16a27300
This eliminates the need to do a get on each token in the user's index
on token issuance. This change alters the maximum number of tokens
that can be outstanding for a given user. The change is two-fold: first,
instead of using JSON to store the token IDs, a native Python list
structure is stored in memcached; second, the expiry for each token is
also stored in the list. The net result is that fewer tokens can be
stored in the user's token index list, due to the increase in data being
stored per token in the index page.

The new logic will attempt to upgrade the old json-style lists to
the new format of [(token_id, expiry), ...] stored as a native
python object in the memcache backend. This conversion will keep
any outstanding tokens in the list from (<time_of_conversion> +
<configured expiration of tokens>). This is done to ensure that
tokens can still be invalidated by operations that invalidate
tokens based upon user/project/trust/etc changes without causing
potential lockups in keystone trying to retrieve all of the
actual token expiration times from memcache.

Closes-bug: #1251123
Change-Id: Ida39b4699ed6c568609a5121573fc3be5c4ab2f4
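
The shape of the new index entry and a simplified version of the upgrade described above (the helper name and TTL handling are assumptions):

    import datetime

    # New entry shape: (token_id, expiry) tuples stored natively, instead of
    # a JSON string of bare IDs.
    def upgrade_index(old_token_ids, conversion_time, token_ttl):
        """Convert a legacy ID list to the [(token_id, expiry), ...] form."""
        assumed_expiry = conversion_time + token_ttl
        return [(token_id, assumed_expiry) for token_id in old_token_ids]

    now = datetime.datetime.utcnow()
    entries = upgrade_index(["tok-1", "tok-2"], now, datetime.timedelta(hours=1))
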
Change-Id: Ia5b940faf01df6cae58df1f4b4f5e8fa2f3f078a
Sorted out inclusion of bugfix for #1231339
list_projects_for_endpoint fails with "500 Internal Server Error".
Modified the function name to 'list_projects_for_endpoint' and
added three test cases.

Change-Id: Ibe86dd4fb845004d4ebce359de034f31bd50ecc9
Closes-Bug: #1269703
(cherry picked from commit 2f7fa55)
Tokens are now added to both the Trustor and Trustee user-token-index
so that bulk token revocations (e.g. password change) of the trustee
will work as expected. This is a backport of the basic code that was
used in the Icehouse-vintage Dogpile Token KVS backend that resolves
this issue by merging the handling of memcache and KVS backends into
the same logic.

Change-Id: I3e19e4a8fc1e11cef6db51d364e80061e97befa7
Closes-Bug: #1260080
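
A simplified picture of the fix: index the new token under both the trustee and the trustor, so bulk revocation of either user's tokens also catches trust tokens (data structures here are illustrative only):

    def index_trust_token(user_token_index, token_id, trustee_id, trustor_id):
        # record the token under both users so revoking either one's tokens
        # (e.g. on a password change) also revokes the trust token
        for user_id in (trustee_id, trustor_id):
            user_token_index.setdefault(user_id, []).append(token_id)
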
Update a couple DELETE operations within the test_sql_upgrade test
case to support the more strict dialect checking that occurs in
0.9.3 of SQLAlchemy for "additional arguments".

Closes-Bug: #1286717
Change-Id: I82b57257a8b49d798d813c65e76757021676ba90
The ec2tokens controller incorrectly uses the access id, not the
hashed credential id in _assert_owner, which means that non-admin
users can't delete their ec2-credentials. Adding the hashing, as
in _get_credentials, fixes the problem. A test was added demonstrating
the issue.

Change-Id: Ifb6e3e10a50541cf21d25880bd74e9aeb6df4f26
Closes-Bug: #1245435
(cherry picked from commit 85ca6ac)
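
A sketch of the idea only: look up and compare the credential by the hashed access key rather than the raw access key; the sha256 hex digest is an assumption based on the message:

    import hashlib

    def credential_id_for(access_key):
        # hypothetical: the credential ID is the sha256 hex digest of the key
        return hashlib.sha256(access_key.encode("utf-8")).hexdigest()

    def assert_owner(user_id, access_key, credentials_by_id):
        cred = credentials_by_id.get(credential_id_for(access_key))
        if cred is None or cred["user_id"] != user_id:
            raise PermissionError("user does not own this EC2 credential")
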
In an HA deployment, a 60-second delay between reconnects can be quite
problematic. This patch changes the delay calculation by setting the max
delay to 5s and by changing the way it is increased.

Unfortunately, this is one of the places where our two main drivers are
not consistent. Rabbit's driver uses configuration parameters for this,
whereas qpid's driver has never had one. However, I would prefer not to
add configuration parameters to qpid's driver, for the following
reasons:

    1. Most OpenStack services depend on the messaging layer, hence
    they need it to be available. A 5s delay seems reasonable, and I
    would question the need to tune it further. Although such frequent
    reconnects can add load to the network, that wouldn't be the main
    issue if one of the brokers goes down.
    2. We're trying to move away from configuration options towards using
    the transport URL. This path is still not clear, and I would prefer
    to avoid adding new options until we clear it up.

Closes-bug: #1281148

Change-Id: I537015f452eb770acba41fdedfe221628f52a920
(cherry picked from commit 8b628d1e024f787dbb93d508117d9148388c0590)
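
A hypothetical delay calculation reflecting the description: grow quickly but never wait more than 5 seconds between attempts (the doubling factor and base delay are assumptions):

    def reconnect_delay(attempt, max_delay=5.0):
        # double the delay on every failed attempt, but cap it at max_delay
        return min(max_delay, 0.5 * (2 ** attempt))

    # attempts 0..4 -> 0.5, 1.0, 2.0, 4.0, 5.0 seconds
    delays = [reconnect_delay(n) for n in range(5)]
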
When a user authenticates against the Identity V3 API, they can specify
multiple authentication methods. This patch removes duplicates, which
could otherwise have been exploited to mount DoS attacks.

Closes-Bug: 1300274
(cherry picked from commit ef868ad)
Cherry-pick from https://review.openstack.org/#/c/84425/

Change-Id: I6e60324309baa094a5e54b012fb0fc528fea72ab
(cherry picked from commit e364ba5)