
Commit

Fix a spelling error: ie. → i.e. (scrapy#4338)
akshaysharmajs authored Feb 18, 2020
1 parent 320cea6 commit 182445f
Showing 17 changed files with 27 additions and 27 deletions.
2 changes: 1 addition & 1 deletion docs/intro/tutorial.rst
@@ -212,7 +212,7 @@ using the :ref:`Scrapy shell <topics-shell>`. Run::
.. note::

Remember to always enclose urls in quotes when running Scrapy shell from
-command-line, otherwise urls containing arguments (ie. ``&`` character)
+command-line, otherwise urls containing arguments (i.e. ``&`` character)
will not work.

On Windows, use double quotes instead::
4 changes: 2 additions & 2 deletions docs/topics/downloader-middleware.rst
@@ -259,8 +259,8 @@ COOKIES_DEBUG

Default: ``False``

-If enabled, Scrapy will log all cookies sent in requests (ie. ``Cookie``
-header) and all cookies received in responses (ie. ``Set-Cookie`` header).
+If enabled, Scrapy will log all cookies sent in requests (i.e. ``Cookie``
+header) and all cookies received in responses (i.e. ``Set-Cookie`` header).

Here's an example of a log with :setting:`COOKIES_DEBUG` enabled::

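For context, ``COOKIES_DEBUG`` is enabled from a project's ``settings.py``. A minimal sketch, assuming a standard Scrapy project layout (the file below is illustrative, not part of this commit)::

    # settings.py
    # Log the Cookie / Set-Cookie headers of every request and response
    COOKIES_DEBUG = True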
6 changes: 3 additions & 3 deletions docs/topics/extensions.rst
@@ -63,7 +63,7 @@ but disabled unless the :setting:`HTTPCACHE_ENABLED` setting is set.
Disabling an extension
======================

-In order to disable an extension that comes enabled by default (ie. those
+In order to disable an extension that comes enabled by default (i.e. those
included in the :setting:`EXTENSIONS_BASE` setting) you must set its order to
``None``. For example::

@@ -345,7 +345,7 @@ signal is received. The information dumped is the following:
After the stack trace and engine status is dumped, the Scrapy process continues
running normally.

-This extension only works on POSIX-compliant platforms (ie. not Windows),
+This extension only works on POSIX-compliant platforms (i.e. not Windows),
because the `SIGQUIT`_ and `SIGUSR2`_ signals are not available on Windows.

There are at least two ways to send Scrapy the `SIGQUIT`_ signal:
@@ -370,7 +370,7 @@ running normally.

For more info see `Debugging in Python`_.

-This extension only works on POSIX-compliant platforms (ie. not Windows).
+This extension only works on POSIX-compliant platforms (i.e. not Windows).

.. _Python debugger: https://docs.python.org/2/library/pdb.html
.. _Debugging in Python: https://pythonconquerstheuniverse.wordpress.com/2009/09/10/debugging-in-python/
2 changes: 1 addition & 1 deletion docs/topics/feed-exports.rst
@@ -301,7 +301,7 @@ FEED_STORE_EMPTY

Default: ``False``

-Whether to export empty feeds (ie. feeds with no items).
+Whether to export empty feeds (i.e. feeds with no items).

.. setting:: FEED_STORAGES

2 changes: 1 addition & 1 deletion docs/topics/jobs.rst
@@ -22,7 +22,7 @@ Job directory

To enable persistence support you just need to define a *job directory* through
the ``JOBDIR`` setting. This directory will be for storing all required data to
-keep the state of a single job (ie. a spider run). It's important to note that
+keep the state of a single job (i.e. a spider run). It's important to note that
this directory must not be shared by different spiders, or even different
jobs/runs of the same spider, as it's meant to be used for storing the state of
a *single* job.
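For context, persistence is enabled by pointing ``JOBDIR`` at a directory unique to one job. A minimal sketch (the directory name is illustrative, not part of this commit)::

    # settings.py
    # Keep spider state on disk so a run can be paused and resumed
    JOBDIR = 'crawls/somespider-1'  # must not be shared across spiders or runs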
2 changes: 1 addition & 1 deletion docs/topics/link-extractors.rst
@@ -49,7 +49,7 @@ LxmlLinkExtractor
:type allow: a regular expression (or list of)

:param deny: a single regular expression (or list of regular expressions)
-       that the (absolute) urls must match in order to be excluded (ie. not
+       that the (absolute) urls must match in order to be excluded (i.e. not
extracted). It has precedence over the ``allow`` parameter. If not
given (or empty) it won't exclude any links.
:type deny: a regular expression (or list of)
4 changes: 2 additions & 2 deletions docs/topics/request-response.rst
@@ -664,7 +664,7 @@ Response objects
.. attribute:: Response.meta

A shortcut to the :attr:`Request.meta` attribute of the
-    :attr:`Response.request` object (ie. ``self.request.meta``).
+    :attr:`Response.request` object (i.e. ``self.request.meta``).

Unlike the :attr:`Response.request` attribute, the :attr:`Response.meta`
attribute is propagated along redirects and retries, so you will get
@@ -760,7 +760,7 @@ TextResponse objects
1. the encoding passed in the ``__init__`` method ``encoding`` argument

2. the encoding declared in the Content-Type HTTP header. If this
-       encoding is not valid (ie. unknown), it is ignored and the next
+       encoding is not valid (i.e. unknown), it is ignored and the next
resolution mechanism is tried.

3. the encoding declared in the response body. The TextResponse class
4 changes: 2 additions & 2 deletions docs/topics/selectors.rst
@@ -986,7 +986,7 @@ a :class:`~scrapy.http.HtmlResponse` object like this::
sel = Selector(html_response)

1. Select all ``<h1>`` elements from an HTML response body, returning a list of
-   :class:`Selector` objects (ie. a :class:`SelectorList` object)::
+   :class:`Selector` objects (i.e. a :class:`SelectorList` object)::

sel.xpath("//h1")

@@ -1013,7 +1013,7 @@ instantiated with an :class:`~scrapy.http.XmlResponse` object::
sel = Selector(xml_response)

1. Select all ``<product>`` elements from an XML response body, returning a list
-   of :class:`Selector` objects (ie. a :class:`SelectorList` object)::
+   of :class:`Selector` objects (i.e. a :class:`SelectorList` object)::

sel.xpath("//product")

6 changes: 3 additions & 3 deletions docs/topics/settings.rst
@@ -248,7 +248,7 @@ CONCURRENT_REQUESTS

Default: ``16``

-The maximum number of concurrent (ie. simultaneous) requests that will be
+The maximum number of concurrent (i.e. simultaneous) requests that will be
performed by the Scrapy downloader.

.. setting:: CONCURRENT_REQUESTS_PER_DOMAIN
@@ -258,7 +258,7 @@ CONCURRENT_REQUESTS_PER_DOMAIN

Default: ``8``

-The maximum number of concurrent (ie. simultaneous) requests that will be
+The maximum number of concurrent (i.e. simultaneous) requests that will be
performed to any single domain.

See also: :ref:`topics-autothrottle` and its
@@ -272,7 +272,7 @@ CONCURRENT_REQUESTS_PER_IP

Default: ``0``

-The maximum number of concurrent (ie. simultaneous) requests that will be
+The maximum number of concurrent (i.e. simultaneous) requests that will be
performed to any single IP. If non-zero, the
:setting:`CONCURRENT_REQUESTS_PER_DOMAIN` setting is ignored, and this one is
used instead. In other words, concurrency limits will be applied per IP, not
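For context, the three concurrency settings touched above interact. A minimal sketch showing their documented defaults (values are illustrative, not part of this commit)::

    # settings.py
    CONCURRENT_REQUESTS = 16            # total simultaneous requests
    CONCURRENT_REQUESTS_PER_DOMAIN = 8  # cap per domain
    CONCURRENT_REQUESTS_PER_IP = 0      # 0 disables; non-zero overrides the per-domain cap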
4 changes: 2 additions & 2 deletions docs/topics/signals.rst
@@ -141,7 +141,7 @@ item_error
.. signal:: item_error
.. function:: item_error(item, response, spider, failure)

-    Sent when a :ref:`topics-item-pipeline` generates an error (ie. raises
+    Sent when a :ref:`topics-item-pipeline` generates an error (i.e. raises
an exception), except :exc:`~scrapy.exceptions.DropItem` exception.

This signal supports returning deferreds from their handlers.
@@ -232,7 +232,7 @@ spider_error
.. signal:: spider_error
.. function:: spider_error(failure, response, spider)

-    Sent when a spider callback generates an error (ie. raises an exception).
+    Sent when a spider callback generates an error (i.e. raises an exception).

This signal does not support returning deferreds from their handlers.

2 changes: 1 addition & 1 deletion scrapy/shell.py
@@ -173,7 +173,7 @@ def _request_deferred(request):
This returns a Deferred whose first pair of callbacks are the request
callback and errback. The Deferred also triggers when the request
-    callback/errback is executed (ie. when the request is downloaded)
+    callback/errback is executed (i.e. when the request is downloaded)
WARNING: Do not call request.replace() until after the deferred is called.
"""
4 changes: 2 additions & 2 deletions scrapy/utils/misc.py
@@ -37,8 +37,8 @@ def arg_to_iter(arg):
def load_object(path):
"""Load an object given its absolute object path, and return it.
-    object can be a class, function, variable or an instance.
-    path ie: 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware'
+    object can be the import path of a class, function, variable or an
+    instance, e.g. 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware'
"""

try:
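For context, ``load_object`` resolves a dotted import path to the object it names, as the corrected docstring describes. A minimal usage sketch, reusing the docstring's own example path::

    from scrapy.utils.misc import load_object

    # Resolve the dotted path to the middleware class it names
    cls = load_object('scrapy.downloadermiddlewares.redirect.RedirectMiddleware')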
2 changes: 1 addition & 1 deletion scrapy/utils/request.py
@@ -28,7 +28,7 @@ def request_fingerprint(request, include_headers=None, keep_fragments=False):
http://www.example.com/query?cat=222&id=111
Even though those are two different URLs both point to the same resource
-    and are equivalent (ie. they should return the same response).
+    and are equivalent (i.e. they should return the same response).
Another example are cookies used to store session ids. Suppose the
following page is only accessible to authenticated users:
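For context, the equivalence described in the corrected docstring means both URLs hash to the same fingerprint. A minimal sketch, assuming the Scrapy 2.0-era API where ``request_fingerprint`` is a module-level function::

    from scrapy import Request
    from scrapy.utils.request import request_fingerprint

    # Same resource, query arguments in a different order
    r1 = Request('http://www.example.com/query?id=111&cat=222')
    r2 = Request('http://www.example.com/query?cat=222&id=111')
    assert request_fingerprint(r1) == request_fingerprint(r2)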
2 changes: 1 addition & 1 deletion scrapy/utils/spider.py
@@ -15,7 +15,7 @@ def iterate_spider_output(result):

def iter_spider_classes(module):
"""Return an iterator over all spider classes defined in the given module
-    that can be instantiated (ie. which have name)
+    that can be instantiated (i.e. which have name)
"""
# this needs to be imported here until get rid of the spider manager
# singleton in scrapy.spider.spiders
4 changes: 2 additions & 2 deletions sep/sep-003.rst
@@ -18,7 +18,7 @@ Prerequisites

This API proposal relies on the following API:

-1. instantiating a item with an item instance as its first argument (ie.
+1. instantiating a item with an item instance as its first argument (i.e.
``item2 = MyItem(item1)``) must return a **copy** of the first item
instance)
2. items can be instantiated using this syntax: ``item = Item(attr1=value1,
@@ -78,7 +78,7 @@ Defining an item containing ItemField's
variants2 = ListField(ItemField(Variant), default=[])

It's important to note here that the (perhaps most intuitive) way of defining a
-Product-Variant relationship (ie. defining a recursive !ItemField) doesn't
+Product-Variant relationship (i.e. defining a recursive !ItemField) doesn't
work. For example, this fails to compile:

::
2 changes: 1 addition & 1 deletion sep/sep-013.rst
@@ -59,7 +59,7 @@ Global changes to all middlewares

To be discussed:

-1. should we support returning deferreds (ie. ``maybeDeferred``) in middleware
+1. should we support returning deferreds (i.e. ``maybeDeferred``) in middleware
methods?
2. should we pass Twisted Failures instead of exceptions to error methods?

2 changes: 1 addition & 1 deletion sep/sep-021.rst
@@ -38,7 +38,7 @@ Goals:

* simple to manage: adding or removing extensions should be just a matter of
adding or removing lines in a ``scrapy.cfg`` file
-* backward compatibility with enabling extension the "old way" (ie. modifying
+* backward compatibility with enabling extension the "old way" (i.e. modifying
settings directly)

Non-goals:
