<!doctype html>
<meta charset=utf-8>
<title>Worst Practices</title>
<link rel=stylesheet href=se.css>
<link rel=prev href=best.html title="Best Practices">
<link rel=next href=grid.html title=Grid>
<script src=docs.js></script>
<h1>Worst Practices</h1>
<h2>Captchas</h2>
<p>CAPTCHAs, short for <em>Completely Automated Public Turing test to tell
Computers and Humans Apart</em>, are explicitly designed to prevent
automation, so don't try! There are two primary strategies to get
around CAPTCHA checks:
<ul>
<li>Disable CAPTCHAs in your test environments
<li>Add a hook to allow tests to bypass the CAPTCHA (see the sketch below)
</ul>
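<p>A minimal sketch of the second option, assuming the application under
test can be configured to accept a bypass token; the
<code>CAPTCHA_BYPASS_TOKEN</code> environment variable and the
<code>captcha_bypass</code> query parameter are hypothetical names, not
part of any real service:
<pre><code class=python>import os

from selenium import webdriver

# Hypothetical bypass: the test environment is configured to skip CAPTCHA
# validation when it sees this token in the query string.
BYPASS_TOKEN = os.environ["CAPTCHA_BYPASS_TOKEN"]

driver = webdriver.Chrome()
driver.get(f"https://staging.example.com/signup?captcha_bypass={BYPASS_TOKEN}")
# ...continue the signup flow without ever being shown a CAPTCHA...
driver.quit()
</code></pre>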
<h2>File Downloads</h2>
<h2>HTTP Response Codes</h2>
<p>For some browser configurations in Selenium RC, Selenium acted as a
proxy between the browser and the site being automated. This meant
that all browser traffic passed through Selenium, and could be
captured or manipulated. The <code>captureNetworkTraffic()</code> method
purported to capture all of the network traffic between the browser
and the site being automated, including HTTP response codes.
<p>Selenium WebDriver takes a completely different approach to browser
automation, preferring to act more like a user, and this is reflected
in the way you write tests with WebDriver. In automated functional
testing, the status code is not a particularly important detail of a
test's failure; the steps that preceded it are more important.
<p>The browser will always represent the HTTP status code in some way;
imagine, for example, a 404 or a 500 error page. A simple way to “fail
fast” when you encounter one of these error pages is to check the page
title or the content of a reliable element (e.g. the
<code>&lt;h1&gt;</code> tag) after every page load. If you are using the
page object model, you can include this check in your class constructor
or a similar point where the page load is expected. Occasionally the
HTTP code is even shown on the browser's error page, and you could use
WebDriver to read it and improve your debugging output.
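<p>A minimal sketch of such a fail-fast check in a page object
constructor; the <code>AccountPage</code> class and its expected heading
text are hypothetical examples:
<pre><code class=python>from selenium.webdriver.common.by import By

class AccountPage:
    """Hypothetical page object; the heading text is an assumed value."""

    def __init__(self, driver):
        self.driver = driver
        # Fail fast: check a reliable element instead of the HTTP status code.
        heading = driver.find_element(By.TAG_NAME, "h1").text
        if heading != "My Account":
            raise AssertionError(
                f"Expected the account page but got heading {heading!r} "
                f"(page title: {driver.title!r})"
            )
</code></pre>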
<p>Checking the webpage itself is in line with WebDriver's ideal practice
of representing and asserting upon the user's view of the website.
<p>If you insist, an advanced solution for capturing HTTP status codes is
to replicate the behavior of Selenium RC by using a proxy. The WebDriver
API provides the ability to set a proxy for the browser, and there are
a number of proxies that let you programmatically inspect and manipulate
the contents of requests sent to and received from the web server. Using
a proxy also lets you decide how you want to respond to redirection
response codes. Additionally, not every browser makes the response
codes available to WebDriver, so opting to use a proxy gives you a
solution that works for every browser.
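<p>As a sketch, you might point the browser at an intercepting proxy you
run yourself (for example mitmproxy or BrowserMob Proxy) and let the
proxy record the status codes; the local address below is an assumption
about your own setup:
<pre><code class=python>from selenium import webdriver

# Assumes an intercepting proxy (e.g. mitmproxy) is already listening
# locally on port 8080 and recording request/response details.
options = webdriver.ChromeOptions()
options.add_argument("--proxy-server=http://localhost:8080")

driver = webdriver.Chrome(options=options)
driver.get("https://www.example.com/")
# The status codes are then read from the proxy's own logs or API,
# not from WebDriver itself.
driver.quit()
</code></pre>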
<h2>Gmail, Email, and Facebook Logins</h2>
<p>For multiple reasons, logging into sites like Gmail and Facebook using
WebDriver is not recommended. Aside from being against the usage terms
for these sites (where you risk having the account shut down), it is
slow and unreliable, which is not what you want when test stability is
important.
<p>The ideal practice is to use the APIs that email providers offer, or,
in the case of Facebook, the developer tools service, which exposes an
API for creating test accounts, friends, and so forth. Although using
an API might seem like a bit of extra work, you will be paid back in
speed, reliability, and stability. The API is also unlikely to change,
whereas webpages and HTML locators change often and require you to
update your test framework.
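<p>A hedged sketch of the idea, creating test state through an HTTP API
rather than driving a third-party login UI; the endpoint and payload
below are hypothetical placeholders, not any real provider's API:
<pre><code class=python>import os

import requests

# Hypothetical endpoint and payload; substitute the real API documented
# by your email provider or by Facebook's test-user service.
API_TOKEN = os.environ["PROVIDER_API_TOKEN"]

response = requests.post(
    "https://api.provider.example/v1/test-accounts",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"name": "WebDriver Test User"},
    timeout=30,
)
response.raise_for_status()
test_account = response.json()
# Use the returned credentials in the flow under test, without ever
# automating the provider's own login page.
</code></pre>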
<p>Logging in to third-party sites using WebDriver at any point in your
test increases the risk of your test failing because it makes the test
longer. A general rule of thumb is that longer tests are more fragile
and unreliable.
<h2>Test Dependency</h2>
<p>
A common misconception about automated testing is that tests must run in a
specific order. Your tests should be able to run in <strong>any</strong> order,
and should not rely on other tests completing in order to be successful.
</p>
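<p>A minimal sketch of what independence looks like in practice, assuming
pytest-style tests against a hypothetical site: each test builds the
state it needs instead of relying on an earlier test having created it.
<pre><code class=python>import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()

def test_search_finds_results(driver):
    # Sets up its own state; assumes nothing about other tests having run.
    driver.get("https://shop.example.com/")            # hypothetical site
    search = driver.find_element(By.NAME, "q")
    search.send_keys("kettle")
    search.submit()
    assert "kettle" in driver.title.lower()

def test_item_page_shows_price(driver):
    # Navigates directly to the page it needs rather than relying on the
    # search test above having run first.
    driver.get("https://shop.example.com/item/42")     # hypothetical URL
    assert driver.find_element(By.CLASS_NAME, "price").text != ""
</code></pre>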
<h2>Performance Testing</h2>
<p>Performance testing using Selenium and WebDriver is generally not
advised. This is not because it is incapable, but because it is not
optimised for the job, and thus you are unlikely to get good results.
<p>It may seem ideal to performance test in the context of the user, but a
suite of WebDriver tests is subject to many points of external
fragility that are beyond your control; for example, browser startup
speed, the speed of HTTP servers, and the response of third-party servers
that host JavaScript or CSS. Variation at these points will cause
variation in your results. It is very difficult to separate the
performance of your website from the performance of these external
resources, and since WebDriver is only an API, you would need to develop
any performance reporting yourself.
<p>The other potential attraction is 'saving time': performing
functional and performance tests at the same time. However, functional
and performance tests have opposing objectives. To test functionality,
a test may need to be patient and wait for loading, but this waiting will
cloud the performance results, and vice versa.
<p>To improve the performance of your website, you need to be able to
analyse overall performance independently of environment differences,
identify poor coding practices, and break down the performance of
individual resources (i.e. CSS or JavaScript) in order to know what to
improve. There are dedicated performance testing tools that can do
this job, provide reporting and analysis, and even make improvement
suggestions.
<p>JMeter is one example of an open source package built for this kind of testing.
<h2>Link Spidering</h2>
<p>
Using Selenium to spider through links is not a recommended practice;
not because it can't be done, but because Selenium is not the ideal tool
for the job. Selenium needs time to start up, and depending on how your
test is written it can take several seconds up to a minute just to get
to the page and traverse the DOM.
</p>
<p>
Instead of using Selenium for this, you can save a great deal of time by
executing a curl command or using a library such as BeautifulSoup, since
these approaches don't rely on starting a browser and navigating to a page.
</p>
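<p>As a sketch of that alternative, the following collects the links on a
page without any browser at all; the URL is a placeholder:
<pre><code class=python>import requests
from bs4 import BeautifulSoup

# Fetch the HTML and parse the anchors directly; no browser is started.
response = requests.get("https://www.example.com/", timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
links = [a.get("href") for a in soup.find_all("a") if a.get("href")]

for link in links:
    print(link)
</code></pre>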