
fix first letter capitalization for Raring and Scrapy
noviluni committed Dec 19, 2019
1 parent c0d84f0 commit 23a67ce
Showing 14 changed files with 33 additions and 33 deletions.
2 changes: 1 addition & 1 deletion docs/contributing.rst
@@ -44,7 +44,7 @@ guidelines when you're going to report a new bug.
* check the :ref:`FAQ <faq>` first to see if your issue is addressed in a
well-known question

-* if you have a general question about scrapy usage, please ask it at
+* if you have a general question about Scrapy usage, please ask it at
`Stack Overflow <https://stackoverflow.com/questions/tagged/scrapy>`__
(use "scrapy" tag).

2 changes: 1 addition & 1 deletion docs/index.rst
@@ -170,7 +170,7 @@ Solving specific problems
Get answers to most frequently asked questions.

:doc:`topics/debug`
-Learn how to debug common problems of your scrapy spider.
+Learn how to debug common problems of your Scrapy spider.

:doc:`topics/contracts`
Learn how to use contracts for testing your spiders.
16 changes: 8 additions & 8 deletions docs/intro/install.rst
@@ -78,9 +78,9 @@ TL;DR: We recommend installing Scrapy inside a virtual environment
on all platforms.

Python packages can be installed either globally (a.k.a system wide),
-or in user-space. We do not recommend installing scrapy system wide.
+or in user-space. We do not recommend installing Scrapy system wide.

-Instead, we recommend that you install scrapy within a so-called
+Instead, we recommend that you install Scrapy within a so-called
"virtual environment" (`virtualenv`_).
Virtualenvs allow you to not conflict with already-installed Python
system packages (which could break some of your system tools and scripts),
@@ -97,7 +97,7 @@ Check this `user guide`_ on how to create your virtualenv.
.. note::
If you use Linux or OS X, `virtualenvwrapper`_ is a handy tool to create virtualenvs.

-Once you have created a virtualenv, you can install scrapy inside it with ``pip``,
+Once you have created a virtualenv, you can install Scrapy inside it with ``pip``,
just like any other Python package.
(See :ref:`platform-specific guides <intro-install-platform-notes>`
below for non-Python dependencies that you may need to install beforehand).
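
For example, a minimal sequence inside a fresh virtualenv might look like this (a sketch; the environment name and the choice of ``virtualenv`` over ``python3 -m venv`` are just illustrative)::

    $ virtualenv venv
    $ source venv/bin/activate
    (venv) $ pip install scrapy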
@@ -144,7 +144,7 @@ albeit with potential issues with TLS connections.
typically too old and slow to catch up with latest Scrapy.


-To install scrapy on Ubuntu (or Ubuntu-based) systems, you need to install
+To install Scrapy on Ubuntu (or Ubuntu-based) systems, you need to install
these dependencies::

sudo apt-get install python3 python3-dev python3-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev
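
With those packages in place, Scrapy itself can then be installed with ``pip``, ideally inside a virtualenv as recommended above (a minimal sketch)::

    pip install scrapy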
@@ -225,17 +225,17 @@ PyPy
We recommend using the latest PyPy version. The version tested is 5.9.0.
For PyPy3, only Linux installation was tested.

-Most scrapy dependencies now have binary wheels for CPython, but not for PyPy.
+Most Scrapy dependencies now have binary wheels for CPython, but not for PyPy.
This means that these dependencies will be built during installation.
On OS X, you are likely to face an issue with building the Cryptography dependency;
the solution to this problem is described
`here <https://github.com/pyca/cryptography/issues/2692#issuecomment-272773481>`_:
``brew install openssl`` and then export the flags that this command
-recommends (only needed when installing scrapy). Installing on Linux has no special
+recommends (only needed when installing Scrapy). Installing on Linux has no special
issues besides installing build dependencies.
-Installing scrapy with PyPy on Windows is not tested.
+Installing Scrapy with PyPy on Windows is not tested.
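
In practice, the OS X workaround mentioned above usually amounts to something like the following (a sketch; use whatever flags ``brew`` itself prints after installing openssl)::

    $ brew install openssl
    $ export CPPFLAGS="-I$(brew --prefix openssl)/include"
    $ export LDFLAGS="-L$(brew --prefix openssl)/lib"
    $ pip install scrapy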

-You can check that scrapy is installed correctly by running ``scrapy bench``.
+You can check that Scrapy is installed correctly by running ``scrapy bench``.
If this command gives errors such as
``TypeError: ... got 2 unexpected keyword arguments``, this means
that setuptools was unable to pick up one PyPy-specific dependency.
20 changes: 10 additions & 10 deletions docs/news.rst
@@ -1360,7 +1360,7 @@ Documentation

- Grammar fixes: :issue:`2128`, :issue:`1566`.
- Download stats badge removed from README (:issue:`2160`).
-- New scrapy :ref:`architecture diagram <topics-architecture>` (:issue:`2165`).
+- New Scrapy :ref:`architecture diagram <topics-architecture>` (:issue:`2165`).
- Updated ``Response`` parameters documentation (:issue:`2197`).
- Reworded misleading :setting:`RANDOMIZE_DOWNLOAD_DELAY` description (:issue:`2190`).
- Add StackOverflow as a support channel (:issue:`2257`).
@@ -1450,7 +1450,7 @@ Documentation
- Use "url" variable in downloader middleware example (:issue:`2015`)
- Grammar fixes (:issue:`2054`, :issue:`2120`)
- New FAQ entry on using BeautifulSoup in spider callbacks (:issue:`2048`)
-- Add notes about scrapy not working on Windows with Python 3 (:issue:`2060`)
+- Add notes about Scrapy not working on Windows with Python 3 (:issue:`2060`)
- Encourage complete titles in pull requests (:issue:`2026`)

Tests
@@ -1509,7 +1509,7 @@ This 1.1 release brings a lot of interesting features and bug fixes:
You can use :setting:`FILES_STORE_S3_ACL` to change it.
- We've reimplemented ``canonicalize_url()`` for more correct output,
especially for URLs with non-ASCII characters (:issue:`1947`).
-This could change link extractors output compared to previous scrapy versions.
+This could change link extractors output compared to previous Scrapy versions.
This may also invalidate some cache entries you could still have from pre-1.1 runs.
**Warning: backward incompatible!**.

@@ -1722,7 +1722,7 @@ Scrapy 1.0.4 (2015-12-30)
- Merge pull request #1513 from mgedmin/patch-2 (:commit:`5d4daf8`)
- Typo (:commit:`f8d0682`)
- Fix list formatting (:commit:`5f83a93`)
-- fix scrapy squeue tests after recent changes to queuelib (:commit:`3365c01`)
+- fix Scrapy squeue tests after recent changes to queuelib (:commit:`3365c01`)
- Merge pull request #1475 from rweindl/patch-1 (:commit:`2d688cd`)
- Update tutorial.rst (:commit:`fbc1f25`)
- Merge pull request #1449 from rhoekman/patch-1 (:commit:`7d6538c`)
@@ -1734,7 +1734,7 @@ Scrapy 1.0.4 (2015-12-30)
Scrapy 1.0.3 (2015-08-11)
-------------------------

-- add service_identity to scrapy install_requires (:commit:`cbc2501`)
+- add service_identity to Scrapy install_requires (:commit:`cbc2501`)
- Workaround for travis#296 (:commit:`66af9cd`)

.. _release-1.0.2:
@@ -2411,7 +2411,7 @@ Enhancements
- scrapy.mail.MailSender now can connect over TLS or upgrade using STARTTLS (:issue:`327`)
- New FilesPipeline with functionality factored out from ImagesPipeline (:issue:`370`, :issue:`409`)
- Recommend Pillow instead of PIL for image handling (:issue:`317`)
-- Added Debian packages for Ubuntu Quantal and raring (:commit:`86230c0`)
+- Added Debian packages for Ubuntu Quantal and Raring (:commit:`86230c0`)
- Mock server (used for tests) can listen for HTTPS requests (:issue:`410`)
- Remove multi spider support from multiple core components
(:issue:`422`, :issue:`421`, :issue:`420`, :issue:`419`, :issue:`423`, :issue:`418`)
@@ -2516,7 +2516,7 @@ Scrapy 0.18.1 (released 2013-08-27)
- limit travis-ci build matrix (:commit:`3b01bb8`)
- Merge pull request #375 from peterarenot/patch-1 (:commit:`fa766d7`)
- Fixed so it refers to the correct folder (:commit:`3283809`)
- added Quantal & raring to support Ubuntu releases (:commit:`1411923`)
- added Quantal & Raring to support Ubuntu releases (:commit:`1411923`)
- fix retry middleware which didn't retry certain connection errors after the upgrade to http1 client, closes GH-373 (:commit:`bb35ed0`)
- fix XmlItemExporter in Python 2.7.4 and 2.7.5 (:commit:`de3e451`)
- minor updates to 0.18 release notes (:commit:`c45e5f1`)
@@ -2555,8 +2555,8 @@ Scrapy 0.18.0 (released 2013-08-09)
- Collect idle downloader slots (:issue:`297`)
- Add ``ftp://`` scheme downloader handler (:issue:`329`)
- Added downloader benchmark webserver and spider tools :ref:`benchmarking`
-- Moved persistent (on disk) queues to a separate project (queuelib_) which scrapy now depends on
-- Add scrapy commands using external libraries (:issue:`260`)
+- Moved persistent (on disk) queues to a separate project (queuelib_) which Scrapy now depends on
+- Add Scrapy commands using external libraries (:issue:`260`)
- Added ``--pdb`` option to ``scrapy`` command line tool
- Added :meth:`XPathSelector.remove_namespaces <scrapy.selector.Selector.remove_namespaces>` which allows to remove all namespaces from XML documents for convenience (to work with namespace-less XPaths). Documented in :ref:`topics-selectors`.
- Several improvements to spider contracts
@@ -2568,7 +2568,7 @@ Scrapy 0.18.0 (released 2013-08-09)
- several more cleanups to singletons and multi-spider support (thanks Nicolas Ramirez)
- support custom download slots
- added --spider option to "shell" command.
-- log overridden settings when scrapy starts
+- log overridden settings when Scrapy starts

Thanks to everyone who contributed to this release. Here is a list of
contributors sorted by number of commits::
2 changes: 1 addition & 1 deletion docs/topics/autothrottle.rst
@@ -11,7 +11,7 @@ Design goals
============

1. be nicer to sites instead of using default download delay of zero
-2. automatically adjust scrapy to the optimum crawling speed, so the user
+2. automatically adjust Scrapy to the optimum crawling speed, so the user
doesn't have to tune the download delays to find the optimum one.
The user only needs to specify the maximum concurrent requests
it allows, and the extension does the rest.
2 changes: 1 addition & 1 deletion docs/topics/commands.rst
@@ -29,7 +29,7 @@ in standard locations:
1. ``/etc/scrapy.cfg`` or ``c:\scrapy\scrapy.cfg`` (system-wide),
2. ``~/.config/scrapy.cfg`` (``$XDG_CONFIG_HOME``) and ``~/.scrapy.cfg`` (``$HOME``)
for global (user-wide) settings, and
-3. ``scrapy.cfg`` inside a scrapy project's root (see next section).
+3. ``scrapy.cfg`` inside a Scrapy project's root (see next section).
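
For reference, a project-root ``scrapy.cfg`` usually contains at least a ``[settings]`` section pointing at the project's settings module (the project name below is illustrative)::

    [settings]
    default = myproject.settings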

Settings from these files are merged in the listed order of preference:
user-defined values have higher priority than system-wide defaults
2 changes: 1 addition & 1 deletion docs/topics/contracts.rst
@@ -64,7 +64,7 @@ Use the :command:`check` command to run the contract checks.
Custom Contracts
================

-If you find you need more power than the built-in scrapy contracts you can
+If you find you need more power than the built-in Scrapy contracts you can
create and load your own contracts in the project by using the
:setting:`SPIDER_CONTRACTS` setting::
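
    # A minimal sketch: the contract classes below are hypothetical; keys are
    # import paths of your own contract classes, values their order.
    SPIDER_CONTRACTS = {
        'myproject.contracts.ResponseCheck': 10,
        'myproject.contracts.ItemValidate': 10,
    }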

2 changes: 1 addition & 1 deletion docs/topics/debug.rst
@@ -5,7 +5,7 @@ Debugging Spiders
=================

This document explains the most common techniques for debugging spiders.
-Consider the following scrapy spider below::
+Consider the following Scrapy spider below::

import scrapy
from myproject.items import MyItem
4 changes: 2 additions & 2 deletions docs/topics/developer-tools.rst
@@ -80,7 +80,7 @@ expand and collapse a tag by clicking on the arrow in front of it or by double
clicking directly on the tag. If we expand the ``span`` tag with the ``class=
"text"`` we will see the quote-text we clicked on. The `Inspector` lets you
copy XPaths to selected elements. Let's try it out: Right-click on the ``span``
-tag, select ``Copy > XPath`` and paste it in the scrapy shell like so::
+tag, select ``Copy > XPath`` and paste it in the Scrapy shell like so::

$ scrapy shell "http://quotes.toscrape.com/"
(...)
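>>> # pasting the XPath copied from the Inspector (the exact path below is
>>> # illustrative; yours will depend on the element you clicked)
>>> response.xpath('/html/body/div/div[2]/div[1]/div[1]/span[1]/text()').getall()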
@@ -159,7 +159,7 @@ The page is quite similar to the basic `quotes.toscrape.com`_-page,
but instead of the above-mentioned ``Next`` button, the page
automatically loads new quotes when you scroll to the bottom. We
could go ahead and try out different XPaths directly, but instead
-we'll check another quite useful command from the scrapy shell::
+we'll check another quite useful command from the Scrapy shell::

$ scrapy shell "quotes.toscrape.com/scroll"
(...)
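>>> # for example, the view() shell shortcut opens the downloaded response
>>> # in your browser for inspection
>>> view(response)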
4 changes: 2 additions & 2 deletions docs/topics/logging.rst
@@ -171,9 +171,9 @@ listed in `logging's logrecord attributes docs
<https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior>`_
respectively.

-If :setting:`LOG_SHORT_NAMES` is set, then the logs will not display the scrapy
+If :setting:`LOG_SHORT_NAMES` is set, then the logs will not display the Scrapy
component that prints the log. It is unset by default, hence logs contain the
-scrapy component responsible for that log output.
+Scrapy component responsible for that log output.
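
For example, enabling it in the project settings is a one-line change (a sketch)::

    LOG_SHORT_NAMES = True  # log records then omit the originating Scrapy component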

Command-line options
--------------------
4 changes: 2 additions & 2 deletions docs/topics/settings.rst
@@ -1264,7 +1264,7 @@ Default::
'scrapy.contracts.default.ScrapesContract': 3,
}

-A dict containing the scrapy contracts enabled by default in Scrapy. You should
+A dict containing the Scrapy contracts enabled by default in Scrapy. You should
never modify this setting in your project, modify :setting:`SPIDER_CONTRACTS`
instead. For more info see :ref:`topics-contracts`.

@@ -1295,7 +1295,7 @@ SPIDER_LOADER_WARN_ONLY

Default: ``False``

-By default, when scrapy tries to import spider classes from :setting:`SPIDER_MODULES`,
+By default, when Scrapy tries to import spider classes from :setting:`SPIDER_MODULES`,
it will fail loudly if there is any ``ImportError`` exception.
But you can choose to silence this exception and turn it into a simple
warning by setting ``SPIDER_LOADER_WARN_ONLY = True``.
2 changes: 1 addition & 1 deletion docs/topics/shell.rst
@@ -31,7 +31,7 @@ for more info.
Scrapy also has support for `bpython`_, and will try to use it where `IPython`_
is unavailable.

-Through scrapy's settings you can configure it to use any one of
+Through Scrapy's settings you can configure it to use any one of
``ipython``, ``bpython`` or the standard ``python`` shell, regardless of which
are installed. This is done by setting the ``SCRAPY_PYTHON_SHELL`` environment
variable; or by defining it in your :ref:`scrapy.cfg <topics-config-settings>`::
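
    # a minimal sketch of the scrapy.cfg approach; pick ipython, bpython or python
    [settings]
    shell = bpython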
2 changes: 1 addition & 1 deletion docs/topics/telnetconsole.rst
@@ -44,7 +44,7 @@ the console you need to type::
>>>

By default Username is ``scrapy`` and Password is autogenerated. The
-autogenerated Password can be seen on scrapy logs like the example below::
+autogenerated Password can be seen on Scrapy logs like the example below::

2018-10-16 14:35:21 [scrapy.extensions.telnet] INFO: Telnet Password: 16f92501e8a59326
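
If you prefer fixed credentials over the autogenerated password, both can be set explicitly in the settings (a sketch; the values are illustrative)::

    TELNETCONSOLE_USERNAME = 'scrapy'
    TELNETCONSOLE_PASSWORD = 'my-secret-password'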

2 changes: 1 addition & 1 deletion sep/sep-004.rst
@@ -53,7 +53,7 @@ Here's a simple proof-of-concept code of such script:
# ... do something more interesting with scraped_items ...

The behaviour of the Scrapy crawler would be controlled by the Scrapy settings,
-naturally, just like any typical scrapy project. But the default settings
+naturally, just like any typical Scrapy project. But the default settings
should be sufficient so as to not require adding any specific setting. But, at
the same time, you could do it if you need to, say, for specifying a custom
middleware.
