This module provides a simple interface to get the Ecoindex of a given web page, using the `ecoindex-python` module.
Requirements:

- Python ^3.10 with pip
- Google Chrome installed on your computer

```shell
pip install ecoindex-scraper
```
You can run a page analysis by calling the function `get_page_analysis()`:

```python
(function) get_page_analysis: (url: HttpUrl, window_size: WindowSize | None = WindowSize(width=1920, height=1080), wait_before_scroll: int | None = 1, wait_after_scroll: int | None = 1) -> Coroutine[Any, Any, Result]
```
Example:

```python
import asyncio
from pprint import pprint

from ecoindex_scraper.scrap import EcoindexScraper

pprint(
    asyncio.run(
        EcoindexScraper(url="http://ecoindex.fr")
        .init_chromedriver()
        .get_page_analysis()
    )
)
```
Result example:

```python
Result(width=1920, height=1080, url=HttpUrl('http://ecoindex.fr', ), size=549.253, nodes=52, requests=12, grade='A', score=90.0, ges=1.2, water=1.8, ecoindex_version='5.0.0', date=datetime.datetime(2022, 9, 12, 10, 54, 46, 773443), page_type=None)
```
Default behaviour: by default, the page analysis:

- uses the latest version of Chrome (can be set to a given version with the parameter `chrome_version_main`, e.g. `107`)
- uses a window size of 1920x1080 pixels (can be set with the parameter `window_size`)
- waits for 1 second once the page is loaded (can be set with the parameter `wait_before_scroll`)
- scrolls to the bottom of the page (if possible)
- waits for 1 second after having scrolled to the bottom of the page (can be set with the parameter `wait_after_scroll`)
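Putting these parameters together, a customized analysis might look like the sketch below. It assumes the `EcoindexScraper` constructor accepts these parameters directly (as it does `url` and `screenshot`) and that `WindowSize` can be imported from `ecoindex.models` alongside `ScreenShot`; both are assumptions, and running it requires Chrome and the installed package:

```python
import asyncio

from ecoindex.models import WindowSize  # import path is an assumption
from ecoindex_scraper.scrap import EcoindexScraper

result = asyncio.run(
    EcoindexScraper(
        url="http://ecoindex.fr",
        chrome_version_main=107,                         # pin the Chrome major version
        window_size=WindowSize(width=1280, height=720),  # custom viewport
        wait_before_scroll=3,                            # seconds to wait after page load
        wait_after_scroll=3,                             # seconds to wait after scrolling
    )
    .init_chromedriver()
    .get_page_analysis()
)
print(result.grade, result.score)
```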
It is possible to generate a screenshot of the analyzed page by adding a `ScreenShot` property to the `EcoindexScraper` object. You have to define an id (can be any string, but a unique id is recommended) and a folder where the screenshot will be saved (if the folder does not exist, it will be created).
```python
import asyncio
from pprint import pprint
from uuid import uuid1

from ecoindex.models import ScreenShot
from ecoindex_scraper.scrap import EcoindexScraper

pprint(
    asyncio.run(
        EcoindexScraper(
            url="http://www.ecoindex.fr/",
            screenshot=ScreenShot(id=str(uuid1()), folder="./screenshots"),
        )
        .init_chromedriver()
        .get_page_analysis()
    )
)
```
You can also run several analyses concurrently, with each worker thread running its own event loop:
```python
import asyncio
from concurrent.futures import ThreadPoolExecutor, as_completed

from ecoindex_scraper.scrap import EcoindexScraper

def run_page_analysis(url):
    return asyncio.run(
        EcoindexScraper(url=url)
        .init_chromedriver()
        .get_page_analysis()
    )

with ThreadPoolExecutor(max_workers=8) as executor:
    future_to_analysis = {}
    url = "https://www.ecoindex.fr"

    for i in range(10):
        future_to_analysis[
            executor.submit(run_page_analysis, url)
        ] = url

    for future in as_completed(future_to_analysis):
        try:
            print(future.result())
        except Exception as e:
            print(e)
```
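The reason `asyncio.run()` is called inside `run_page_analysis` is that each worker thread needs its own event loop. The pattern can be exercised in isolation with a stub coroutine standing in for the scraper (the stub and its return value below are purely illustrative, not part of the library):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor, as_completed

async def fake_page_analysis(url: str) -> dict:
    # Stand-in for get_page_analysis(); only simulates a bit of async I/O.
    await asyncio.sleep(0.01)
    return {"url": url, "grade": "A"}

def run_page_analysis(url: str) -> dict:
    # Each worker thread creates (and tears down) its own event loop,
    # which is why asyncio.run() is called inside the thread.
    return asyncio.run(fake_page_analysis(url))

results = []
with ThreadPoolExecutor(max_workers=8) as executor:
    futures = [
        executor.submit(run_page_analysis, f"https://example.com/{i}")
        for i in range(10)
    ]
    for future in as_completed(futures):
        results.append(future.result())

print(len(results))  # 10
```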
Note: in this case, it is highly recommended to use a fixed `chromedriver` version. You can set it with the parameters `chrome_version_main` (e.g. `107`) and `driver_executable_path` (e.g. `/usr/bin/chromedriver`). Otherwise, undetected-chromedriver will download the latest version of chromedriver and patch it for each analysis.
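A pinned setup could look like the following sketch, combining the two parameters named above (the version number and driver path are illustrative and depend on your system; running it requires Chrome and the installed package):

```python
import asyncio

from ecoindex_scraper.scrap import EcoindexScraper

# Pinning both values avoids undetected-chromedriver downloading and
# patching a fresh chromedriver for every analysis.
result = asyncio.run(
    EcoindexScraper(
        url="http://ecoindex.fr",
        chrome_version_main=107,                         # fixed Chrome major version
        driver_executable_path="/usr/bin/chromedriver",  # pre-installed driver
    )
    .init_chromedriver()
    .get_page_analysis()
)
print(result)
```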
You need poetry to install and manage dependencies. Once poetry is installed, run:

```shell
poetry install
```

To run the tests:

```shell
poetry run pytest
```
The LCA values used by `ecoindex_scraper` to evaluate environmental impacts are not under a free license (© Frédéric Bordage). Please also refer to the mentions provided in the code files for specifics on the IP regime.