Merge branch 'main' into monitoring-capi
iritkatriel authored Apr 2, 2024
2 parents c605827 + e3b6f28 commit a6d45c4
Showing 69 changed files with 4,163 additions and 3,339 deletions.
61 changes: 48 additions & 13 deletions Doc/library/asyncio-task.rst
@@ -867,19 +867,50 @@ Waiting Primitives

.. function:: as_completed(aws, *, timeout=None)

Run :ref:`awaitable objects <asyncio-awaitables>` in the *aws*
iterable concurrently. Return an iterator of coroutines.
Each coroutine returned can be awaited to get the earliest next
result from the iterable of the remaining awaitables.

Raises :exc:`TimeoutError` if the timeout occurs before
all Futures are done.

Example::

for coro in as_completed(aws):
earliest_result = await coro
# ...
Run :ref:`awaitable objects <asyncio-awaitables>` in the *aws* iterable
concurrently. The returned object can be iterated to obtain the results
of the awaitables as they finish.

The object returned by ``as_completed()`` can be iterated as an
:term:`asynchronous iterator` or a plain :term:`iterator`. When asynchronous
iteration is used, the originally-supplied awaitables are yielded if they
are tasks or futures. This makes it easy to correlate previously-scheduled
tasks with their results. Example::

ipv4_connect = create_task(open_connection("127.0.0.1", 80))
ipv6_connect = create_task(open_connection("::1", 80))
tasks = [ipv4_connect, ipv6_connect]

async for earliest_connect in as_completed(tasks):
# earliest_connect is done. The result can be obtained by
# awaiting it or calling earliest_connect.result()
reader, writer = await earliest_connect

if earliest_connect is ipv6_connect:
print("IPv6 connection established.")
else:
print("IPv4 connection established.")

During asynchronous iteration, implicitly-created tasks will be yielded for
supplied awaitables that aren't tasks or futures.
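
For instance, an awaitable that is not already a task or future is yielded back wrapped in a task, so identity checks against the original object will not match. A minimal sketch, reusing ``open_connection`` as above and assuming a hypothetical ``fetch()`` coroutine::

    async def fetch(host):
        reader, writer = await open_connection(host, 80)
        return host

    coro = fetch("127.0.0.1")
    async for earliest in as_completed([coro]):
        # `earliest` is an implicitly created Task wrapping `coro`,
        # not `coro` itself; awaiting it yields the coroutine's result.
        assert earliest is not coro
        host = await earliest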

When used as a plain iterator, each iteration yields a new coroutine that
returns the result or raises the exception of the next completed awaitable.
This pattern is compatible with Python versions older than 3.13::

ipv4_connect = create_task(open_connection("127.0.0.1", 80))
ipv6_connect = create_task(open_connection("::1", 80))
tasks = [ipv4_connect, ipv6_connect]

for next_connect in as_completed(tasks):
# next_connect is not one of the original task objects. It must be
# awaited to obtain the result value or raise the exception of the
# awaitable that finishes next.
reader, writer = await next_connect

A :exc:`TimeoutError` is raised if the timeout occurs before all awaitables
are done. This is raised by the ``async for`` loop during asynchronous
iteration or by the coroutines yielded during plain iteration.
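
As an illustrative sketch, the timeout can be handled around the asynchronous loop itself (reusing ``tasks`` from the example above; the five-second limit is arbitrary)::

    try:
        async for earliest_connect in as_completed(tasks, timeout=5):
            reader, writer = await earliest_connect
    except TimeoutError:
        print("Not every connection was established within 5 seconds.")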

.. versionchanged:: 3.10
Removed the *loop* parameter.
@@ -891,6 +922,10 @@ Waiting Primitives
.. versionchanged:: 3.12
Added support for generators yielding tasks.

.. versionchanged:: 3.13
The result can now be used as either an :term:`asynchronous iterator`
or as a plain :term:`iterator` (previously it was only a plain iterator).


Running in Threads
==================
13 changes: 8 additions & 5 deletions Doc/library/subprocess.rst
@@ -52,10 +52,12 @@ underlying :class:`Popen` interface can be used directly.

If *capture_output* is true, stdout and stderr will be captured.
When used, the internal :class:`Popen` object is automatically created with
``stdout=PIPE`` and ``stderr=PIPE``. The *stdout* and *stderr* arguments may
not be supplied at the same time as *capture_output*. If you wish to capture
and combine both streams into one, use ``stdout=PIPE`` and ``stderr=STDOUT``
instead of *capture_output*.
*stdout* and *stderr* both set to :data:`~subprocess.PIPE`.
The *stdout* and *stderr* arguments may not be supplied at the same time as *capture_output*.
If you wish to capture and combine both streams into one,
set *stdout* to :data:`~subprocess.PIPE`
and *stderr* to :data:`~subprocess.STDOUT`,
instead of using *capture_output*.
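
A brief sketch of the two capture styles described above, assuming a POSIX ``ls`` command is available::

    import subprocess

    # Capture stdout and stderr separately.
    separate = subprocess.run(["ls", "-l"], capture_output=True, text=True)
    print(separate.stdout)
    print(separate.stderr)

    # Combine both streams: stderr output lands in .stdout.
    combined = subprocess.run(
        ["ls", "-l", "no-such-file"],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    print(combined.stdout)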

A *timeout* may be specified in seconds; it is internally passed on to
:meth:`Popen.communicate`. If the timeout expires, the child process will be
@@ -69,7 +71,8 @@ underlying :class:`Popen` interface can be used directly.
subprocess's stdin. If used it must be a byte sequence, or a string if
*encoding* or *errors* is specified or *text* is true. When
used, the internal :class:`Popen` object is automatically created with
``stdin=PIPE``, and the *stdin* argument may not be used as well.
*stdin* set to :data:`~subprocess.PIPE`,
and the *stdin* argument may not be used as well.
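
For example, a sketch assuming a POSIX ``cat`` command; *input* must be a string here because ``text=True`` is used::

    import subprocess

    result = subprocess.run(["cat"], input="hello\n",
                            text=True, capture_output=True)
    assert result.stdout == "hello\n"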

If *check* is true, and the process exits with a non-zero exit code, a
:exc:`CalledProcessError` exception will be raised. Attributes of that
2 changes: 1 addition & 1 deletion Doc/library/xml.etree.elementtree.rst
@@ -49,7 +49,7 @@ and its sub-elements are done on the :class:`Element` level.
Parsing XML
^^^^^^^^^^^

We'll be using the following XML document as the sample data for this section:
We'll be using the fictional :file:`country_data.xml` XML document as the sample data for this section:

.. code-block:: xml
12 changes: 12 additions & 0 deletions Doc/whatsnew/3.13.rst
@@ -289,6 +289,13 @@ asyncio
forcefully close an asyncio server.
(Contributed by Pierre Ossman in :gh:`113538`.)

* :func:`asyncio.as_completed` now returns an object that is both an
:term:`asynchronous iterator` and a plain :term:`iterator` of awaitables.
The awaitables yielded by asynchronous iteration include original task or
future objects that were passed in, making it easier to associate results
with the tasks being completed.
(Contributed by Justin Arthur in :gh:`77714`.)

base64
------

@@ -806,6 +813,11 @@ Deprecated
translation was not found.
(Contributed by Serhiy Storchaka in :gh:`88434`.)

* :mod:`glob`: The undocumented :func:`!glob.glob0` and :func:`!glob.glob1`
functions are deprecated. Use :func:`glob.glob` and pass a directory to its
*root_dir* argument instead.
(Contributed by Barney Gale in :gh:`117337`.)
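
A minimal sketch of the suggested replacement; the directory name is hypothetical::

    import glob

    # Instead of the undocumented glob.glob1(directory, pattern):
    matches = glob.glob("*.txt", root_dir="/tmp/example")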

* :mod:`http.server`: :class:`http.server.CGIHTTPRequestHandler` now emits a
:exc:`DeprecationWarning` as it will be removed in 3.15. Process-based CGI
HTTP servers have been out of favor for a very long time. This code was
6 changes: 4 additions & 2 deletions Grammar/python.gram
@@ -1013,6 +1013,7 @@ kwargs[asdl_seq*]:
starred_expression[expr_ty]:
| invalid_starred_expression
| '*' a=expression { _PyAST_Starred(a, Load, EXTRA) }
| '*' { RAISE_SYNTAX_ERROR("Invalid star expression") }

kwarg_or_starred[KeywordOrStarred*]:
| invalid_kwarg
@@ -1133,8 +1134,8 @@ func_type_comment[Token*]:

# From here on, there are rules for invalid syntax with specialised error messages
invalid_arguments:
| ((','.(starred_expression | ( assignment_expression | expression !':=') !'=')+ ',' kwargs) | kwargs) ',' b='*' {
RAISE_SYNTAX_ERROR_KNOWN_LOCATION(b, "iterable argument unpacking follows keyword argument unpacking") }
| ((','.(starred_expression | ( assignment_expression | expression !':=') !'=')+ ',' kwargs) | kwargs) a=',' ','.(starred_expression !'=')+ {
RAISE_SYNTAX_ERROR_STARTING_FROM(a, "iterable argument unpacking follows keyword argument unpacking") }
| a=expression b=for_if_clauses ',' [args | expression for_if_clauses] {
RAISE_SYNTAX_ERROR_KNOWN_RANGE(a, _PyPegen_get_last_comprehension_item(PyPegen_last_item(b, comprehension_ty)), "Generator expression must be parenthesized") }
| a=NAME b='=' expression for_if_clauses {
@@ -1396,6 +1397,7 @@ invalid_kvpair:
| expression a=':' &('}'|',') {RAISE_SYNTAX_ERROR_KNOWN_LOCATION(a, "expression expected after dictionary key and ':'") }
invalid_starred_expression:
| a='*' expression '=' b=expression { RAISE_SYNTAX_ERROR_KNOWN_RANGE(a, b, "cannot assign to iterable argument unpacking") }

invalid_replacement_field:
| '{' a='=' { RAISE_SYNTAX_ERROR_KNOWN_LOCATION(a, "f-string: valid expression required before '='") }
| '{' a='!' { RAISE_SYNTAX_ERROR_KNOWN_LOCATION(a, "f-string: valid expression required before '!'") }
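
To illustrate the specialised error messages touched in this grammar file, here is a rough sketch; which message a given input produces can vary by Python version, and the snippets are deliberately invalid, so they are only compiled, never executed::

    for src in (
        "print(*)",                   # bare '*' in a call
        "dict(**{'a': 1}, *[1, 2])",  # iterable unpacking after keyword unpacking
        "f(*args = [1, 2])",          # assigning to a starred argument
    ):
        try:
            compile(src, "<test>", "exec")
        except SyntaxError as exc:
            print(f"{src!r}: {exc.msg}")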
20 changes: 0 additions & 20 deletions Include/cpython/compile.h
@@ -32,28 +32,8 @@ typedef struct {
#define _PyCompilerFlags_INIT \
(PyCompilerFlags){.cf_flags = 0, .cf_feature_version = PY_MINOR_VERSION}

/* source location information */
typedef struct {
int lineno;
int end_lineno;
int col_offset;
int end_col_offset;
} _PyCompilerSrcLocation;

#define SRC_LOCATION_FROM_AST(n) \
(_PyCompilerSrcLocation){ \
.lineno = (n)->lineno, \
.end_lineno = (n)->end_lineno, \
.col_offset = (n)->col_offset, \
.end_col_offset = (n)->end_col_offset }

/* Future feature support */

typedef struct {
int ff_features; /* flags set by future statements */
_PyCompilerSrcLocation ff_location; /* location of last future statement */
} PyFutureFeatures;

#define FUTURE_NESTED_SCOPES "nested_scopes"
#define FUTURE_GENERATORS "generators"
#define FUTURE_DIVISION "division"
3 changes: 1 addition & 2 deletions Include/cpython/pystats.h
@@ -77,12 +77,11 @@ typedef struct _object_stats {
uint64_t frees;
uint64_t to_freelist;
uint64_t from_freelist;
uint64_t new_values;
uint64_t inline_values;
uint64_t dict_materialized_on_request;
uint64_t dict_materialized_new_key;
uint64_t dict_materialized_too_big;
uint64_t dict_materialized_str_subclass;
uint64_t dict_dematerialized;
uint64_t type_cache_hits;
uint64_t type_cache_misses;
uint64_t type_cache_dunder_hits;
5 changes: 4 additions & 1 deletion Include/internal/pycore_code.h
@@ -79,7 +79,10 @@ typedef struct {
typedef struct {
uint16_t counter;
uint16_t type_version[2];
uint16_t keys_version[2];
union {
uint16_t keys_version[2];
uint16_t dict_offset;
};
uint16_t descr[4];
} _PyLoadMethodCache;

8 changes: 5 additions & 3 deletions Include/internal/pycore_compile.h
@@ -8,6 +8,8 @@ extern "C" {
# error "this header requires Py_BUILD_CORE define"
#endif

#include "pycore_symtable.h" // _Py_SourceLocation

struct _arena; // Type defined in pycore_pyarena.h
struct _mod; // Type defined in pycore_ast.h

@@ -27,7 +29,7 @@ extern int _PyCompile_AstOptimize(
int optimize,
struct _arena *arena);

static const _PyCompilerSrcLocation NO_LOCATION = {-1, -1, -1, -1};
struct _Py_SourceLocation;

extern int _PyAST_Optimize(
struct _mod *,
@@ -44,7 +46,7 @@ typedef struct {
typedef struct {
int i_opcode;
int i_oparg;
_PyCompilerSrcLocation i_loc;
_Py_SourceLocation i_loc;
_PyCompile_ExceptHandlerInfo i_except_handler_info;

/* Used by the assembler */
@@ -65,7 +67,7 @@ typedef struct {
int _PyCompile_InstructionSequence_UseLabel(_PyCompile_InstructionSequence *seq, int lbl);
int _PyCompile_InstructionSequence_Addop(_PyCompile_InstructionSequence *seq,
int opcode, int oparg,
_PyCompilerSrcLocation loc);
_Py_SourceLocation loc);
int _PyCompile_InstructionSequence_ApplyLabelMap(_PyCompile_InstructionSequence *seq);

typedef struct {
64 changes: 52 additions & 12 deletions Include/internal/pycore_dict.h
@@ -11,7 +11,7 @@ extern "C" {

#include "pycore_freelist.h" // _PyFreeListState
#include "pycore_identifier.h" // _Py_Identifier
#include "pycore_object.h" // PyDictOrValues
#include "pycore_object.h" // PyManagedDictPointer

// Unsafe flavor of PyDict_GetItemWithError(): no error checking
extern PyObject* _PyDict_GetItemWithError(PyObject *dp, PyObject *key);
@@ -181,6 +181,10 @@ struct _dictkeysobject {
* [-1] = prefix size. [-2] = used size. size[-2-n...] = insertion order.
*/
struct _dictvalues {
uint8_t capacity;
uint8_t size;
uint8_t embedded;
uint8_t valid;
PyObject *values[1];
};

@@ -196,6 +200,7 @@ static inline void* _DK_ENTRIES(PyDictKeysObject *dk) {
size_t index = (size_t)1 << dk->dk_log2_index_bytes;
return (&indices[index]);
}

static inline PyDictKeyEntry* DK_ENTRIES(PyDictKeysObject *dk) {
assert(dk->dk_kind == DICT_KEYS_GENERAL);
return (PyDictKeyEntry*)_DK_ENTRIES(dk);
@@ -211,9 +216,6 @@ static inline PyDictUnicodeEntry* DK_UNICODE_ENTRIES(PyDictKeysObject *dk) {
#define DICT_WATCHER_MASK ((1 << DICT_MAX_WATCHERS) - 1)
#define DICT_WATCHER_AND_MODIFICATION_MASK ((1 << (DICT_MAX_WATCHERS + DICT_WATCHED_MUTATION_BITS)) - 1)

#define DICT_VALUES_SIZE(values) ((uint8_t *)values)[-1]
#define DICT_VALUES_USED_SIZE(values) ((uint8_t *)values)[-2]

#ifdef Py_GIL_DISABLED
#define DICT_NEXT_VERSION(INTERP) \
(_Py_atomic_add_uint64(&(INTERP)->dict_state.global_version, DICT_VERSION_INCREMENT) + DICT_VERSION_INCREMENT)
@@ -246,25 +248,63 @@ _PyDict_NotifyEvent(PyInterpreterState *interp,
return DICT_NEXT_VERSION(interp) | (mp->ma_version_tag & DICT_WATCHER_AND_MODIFICATION_MASK);
}

extern PyObject *_PyObject_MakeDictFromInstanceAttributes(PyObject *obj, PyDictValues *values);
PyAPI_FUNC(bool) _PyObject_MakeInstanceAttributesFromDict(PyObject *obj, PyDictOrValues *dorv);
extern PyDictObject *_PyObject_MakeDictFromInstanceAttributes(PyObject *obj);

PyAPI_FUNC(PyObject *)_PyDict_FromItems(
PyObject *const *keys, Py_ssize_t keys_offset,
PyObject *const *values, Py_ssize_t values_offset,
Py_ssize_t length);

static inline uint8_t *
get_insertion_order_array(PyDictValues *values)
{
return (uint8_t *)&values->values[values->capacity];
}

static inline void
_PyDictValues_AddToInsertionOrder(PyDictValues *values, Py_ssize_t ix)
{
assert(ix < SHARED_KEYS_MAX_SIZE);
uint8_t *size_ptr = ((uint8_t *)values)-2;
int size = *size_ptr;
assert(size+2 < DICT_VALUES_SIZE(values));
size++;
size_ptr[-size] = (uint8_t)ix;
*size_ptr = size;
int size = values->size;
uint8_t *array = get_insertion_order_array(values);
assert(size < values->capacity);
assert(((uint8_t)ix) == ix);
array[size] = (uint8_t)ix;
values->size = size+1;
}

static inline size_t
shared_keys_usable_size(PyDictKeysObject *keys)
{
#ifdef Py_GIL_DISABLED
// dk_usable will decrease for each instance that is created and each
// value that is added. dk_nentries will increase for each value that
// is added. We want to always return the right value or larger.
// We therefore increase dk_nentries first and we decrease dk_usable
// second, and conversely here we read dk_usable first and dk_entries
// second (to avoid the case where we read entries before the increment
// and read usable after the decrement)
return (size_t)(_Py_atomic_load_ssize_acquire(&keys->dk_usable) +
_Py_atomic_load_ssize_acquire(&keys->dk_nentries));
#else
return (size_t)keys->dk_nentries + (size_t)keys->dk_usable;
#endif
}

static inline size_t
_PyInlineValuesSize(PyTypeObject *tp)
{
PyDictKeysObject *keys = ((PyHeapTypeObject*)tp)->ht_cached_keys;
assert(keys != NULL);
size_t size = shared_keys_usable_size(keys);
size_t prefix_size = _Py_SIZE_ROUND_UP(size, sizeof(PyObject *));
assert(prefix_size < 256);
return prefix_size + (size + 1) * sizeof(PyObject *);
}

int
_PyDict_DetachFromObject(PyDictObject *dict, PyObject *obj);

#ifdef __cplusplus
}
#endif
2 changes: 1 addition & 1 deletion Include/internal/pycore_flowgraph.h
@@ -18,7 +18,7 @@ typedef struct {
struct _PyCfgBuilder;

int _PyCfgBuilder_UseLabel(struct _PyCfgBuilder *g, _PyCfgJumpTargetLabel lbl);
int _PyCfgBuilder_Addop(struct _PyCfgBuilder *g, int opcode, int oparg, _PyCompilerSrcLocation loc);
int _PyCfgBuilder_Addop(struct _PyCfgBuilder *g, int opcode, int oparg, _Py_SourceLocation loc);

struct _PyCfgBuilder* _PyCfgBuilder_New(void);
void _PyCfgBuilder_Free(struct _PyCfgBuilder *g);