HOWTO Fetch Internet Resources Using urllib2
============================================

There is a French translation of an earlier revision of this HOWTO, available
at `urllib2 - Le Manuel manquant`_.

Introduction
------------

**urllib2** is a Python module for fetching URLs (Uniform Resource Locators).
It offers a very simple interface, in the form of the *urlopen* function, as
well as a slightly more complex interface for handling common situations such
as basic authentication, cookies, and proxies. These are provided by objects
called handlers and openers.

urllib2 supports fetching URLs for many "URL schemes" (identified by the string
before the ``:`` in the URL; for example ``"ftp"`` is the URL scheme of
``"ftp://python.org/"``) using their associated network protocols (e.g. FTP,
HTTP). This tutorial focuses on the most common case, HTTP.

The most comprehensive reference to HTTP is :rfc:`2616`, a technical document
not intended to be easy to read. This HOWTO aims to illustrate using *urllib2*,
with enough detail about HTTP to help you through. It is not intended to replace
the :mod:`urllib2` docs, but is supplementary to them.

Fetching URLs
-------------

The simplest way to use urllib2 is as follows::

    import urllib2
    response = urllib2.urlopen('http://python.org/')
    html = response.read()
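
The object that *urlopen* returns is file-like. Besides ``read``, it has
``geturl`` and ``info`` methods (both part of the normal urllib2 response
API); a quick sketch of what they give you::

    import urllib2

    response = urllib2.urlopen('http://python.org/')
    print response.geturl()    # the URL actually fetched, after any redirects
    print response.info()      # the response headers
    print response.read(100)   # read() takes an optional byte count, like a file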

Many uses of urllib2 will be that simple (note that instead of an 'http:' URL
we could have used a URL starting with 'ftp:', 'file:', etc.). However, it's
worth explaining what happens behind the scenes. HTTP is based on requests and
responses: the client makes requests and servers send responses. urllib2
mirrors this with a ``Request`` object which represents the HTTP request you
are making. Calling *urlopen* with a ``Request`` returns a response object for
the URL requested. ::

    import urllib2

    req = urllib2.Request('http://www.voidspace.org.uk')
    response = urllib2.urlopen(req)
    the_page = response.read()

Note that urllib2 makes use of the same Request interface to handle all URL
schemes. For example, you can make an FTP request like so::

    req = urllib2.Request('ftp://example.com/')

In the case of HTTP, there are two extra things that ``Request`` objects allow
you to do: you can pass data to be sent to the server, and you can pass extra
information ("metadata") about the data or about the request itself, in the
form of HTTP headers.

Data
----

Sometimes you want to send data to a URL (often the URL will refer to a CGI
script or other web application). With HTTP, this is often done using what's
known as a **POST** request. The data needs to be encoded in a standard way,
and then passed to the ``Request`` object as the ``data`` argument. The
encoding is done using a function from the :mod:`urllib` library, *not* from
``urllib2``. ::

    import urllib
    import urllib2

    data = urllib.urlencode({'name': 'Michael Foord', 'location': 'Northampton'})
    req = urllib2.Request('http://www.someserver.com/cgi-bin/register.cgi', data)
    response = urllib2.urlopen(req)
    the_page = response.read()

If you do not pass the ``data`` argument, urllib2 uses a **GET** request. One
way in which GET and POST requests differ is that POST requests often have
"side-effects": they change the state of the system in some way. Data can also
be passed in an HTTP GET request by encoding it in the URL itself: the full URL
is created by adding a ``?`` to the URL, followed by the encoded values. ::

    >>> import urllib
    >>> import urllib2
    >>> url_values = urllib.urlencode({'name': 'Somebody Here', 'language': 'Python'})
    >>> full_url = 'http://www.example.com/example.cgi' + '?' + url_values
    >>> data = urllib2.urlopen(full_url)

Headers
-------

Some websites dislike being browsed by programs, or send different versions to
different browsers [#]_. By default urllib2 identifies itself as
``Python-urllib/x.y`` (where ``x`` and ``y`` are the major and minor version
numbers of the Python release), which may confuse the site, or just plain not
work. The way a browser identifies itself is through the ``User-Agent`` header.
The example below makes the same request as above, but identifies itself as a
version of Internet Explorer. ::

    import urllib
    import urllib2

    headers = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
    data = urllib.urlencode({'name': 'Michael Foord', 'location': 'Northampton'})
    req = urllib2.Request('http://www.someserver.com/cgi-bin/register.cgi', data, headers)
    response = urllib2.urlopen(req)
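
You can also add headers to an existing request; ``Request.add_header`` is
part of the urllib2 API, so an equivalent sketch is::

    import urllib2

    req = urllib2.Request('http://www.someserver.com/cgi-bin/register.cgi')
    req.add_header('User-Agent', 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)')
    response = urllib2.urlopen(req)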

Handling Exceptions
-------------------

*urlopen* raises :exc:`URLError` when it cannot handle a response (though, as
usual with Python APIs, built-in exceptions such as :exc:`ValueError` or
:exc:`TypeError` may also be raised). :exc:`HTTPError` is the subclass of
:exc:`URLError` raised in the specific case of HTTP URLs.

Often, :exc:`URLError` is raised because there is no network connection (no
route to the specified server), or the specified server doesn't exist. In this
case, the exception raised will have a 'reason' attribute, which is a tuple
containing an error code and a text error message. e.g. ::

    >>> req = urllib2.Request('http://www.pretend_server.org')
    >>> try: urllib2.urlopen(req)
    ... except urllib2.URLError as e:
    ...     print e.reason
    ...
    (4, 'getaddrinfo failed')

Every HTTP response from the server contains a numeric "status code". Sometimes
the status code indicates that the server is unable to fulfil the request. The
default handlers will handle some of these responses for you (for example, if
the response is a "redirection" that requests the client fetch the document from
a different URL, urllib2 will handle that for you). For those it can't handle,
*urlopen* will raise an :exc:`HTTPError`. Typical errors include '404' (page
not found), '403' (request forbidden), and '401' (authentication required). The
:exc:`HTTPError` instance raised will have an integer 'code' attribute
corresponding to the error the server sent. ::

    >>> req = urllib2.Request('http://www.python.org/fish.html')
    >>> try:
    ...     urllib2.urlopen(req)
    ... except urllib2.HTTPError as e:
    ...     print e.code
    ...
    404

Wrapping it Up
^^^^^^^^^^^^^^

So if you want to be prepared for :exc:`HTTPError` *or* :exc:`URLError` there
are two basic approaches (I prefer the second). The first catches each
exception class with its own ``except`` clause, starting from ::

    from urllib2 import Request, urlopen, URLError, HTTPError

The second relies on the fact that :exc:`HTTPError` is a subclass of
:exc:`URLError`, so a single ``except URLError`` clause covers both, and only
needs ::

    from urllib2 import Request, urlopen, URLError
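
A sketch of the second approach (``someurl`` stands in for whatever URL you
are fetching); ``hasattr`` distinguishes a failure to reach the server from an
HTTP error status::

    from urllib2 import Request, urlopen, URLError

    req = Request(someurl)
    try:
        response = urlopen(req)
    except URLError as e:
        if hasattr(e, 'reason'):
            print 'We failed to reach a server.'
            print 'Reason: ', e.reason
        elif hasattr(e, 'code'):
            print 'The server couldn\'t fulfill the request.'
            print 'Error code: ', e.code
    else:
        # everything is fine
        the_page = response.read()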

Openers and Handlers
--------------------

When you fetch a URL you use an opener (an instance of the perhaps
confusingly-named :class:`urllib2.OpenerDirector`). Normally we have been using
the default opener, via *urlopen*, but you can create custom openers. Openers
use handlers; all the "heavy lifting" is done by the handlers. Each handler
knows how to open URLs for a particular URL scheme (http, ftp, etc.), or how to
handle some aspect of URL opening, for example HTTP redirections or HTTP
cookies. Openers are created with :func:`build_opener`, and
:func:`install_opener` makes an opener the default for all subsequent calls to
*urlopen*.
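
For instance, a small sketch of a custom opener: ``HTTPHandler`` accepts a
``debuglevel`` argument, and at ``debuglevel=1`` it prints the HTTP traffic,
which is handy for debugging::

    import urllib2

    handler = urllib2.HTTPHandler(debuglevel=1)
    opener = urllib2.build_opener(handler)

    # use the opener directly ...
    response = opener.open('http://python.org/')

    # ... or install it, so that plain urllib2.urlopen uses it too
    urllib2.install_opener(opener)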

Basic Authentication
--------------------

``HTTPBasicAuthHandler`` uses an object called a password manager to handle
the mapping of URLs and realms to passwords and usernames. Frequently you
don't care what the realm is, in which case it is convenient to use
``HTTPPasswordMgrWithDefaultRealm``: you specify a default username and
password for a URL, to be supplied when no pair is provided for a specific
realm, indicated by passing ``None`` as the realm argument to
``add_password``. ::

    # create a password manager and add the username and password
    password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
    top_level_url = "http://example.com/foo/"
    password_mgr.add_password(None, top_level_url, username, password)

    handler = urllib2.HTTPBasicAuthHandler(password_mgr)
    opener = urllib2.build_opener(handler)

    # Install the opener.
    # Now all calls to urllib2.urlopen use our opener.
    urllib2.install_opener(opener)
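
In the block above, ``username``, ``password``, and the target URL are
placeholders (define them before building the password manager). Installing
the opener is optional; you can also use it directly. A sketch with made-up
values::

    username = 'klem'
    password = 'kadidd!ehopper'
    a_url = 'http://example.com/foo/protected.html'

    the_page = opener.open(a_url).read()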

Proxies
-------

**urllib2** will auto-detect your proxy settings and use those. This is through
the ``ProxyHandler``, which is part of the normal handler chain. Normally
that's a good thing, but there are occasions when it may not be helpful [#]_.
One way to switch it off is to set up our own ``ProxyHandler``, with no proxies
defined, using similar steps to setting up a Basic Authentication handler::

    >>> proxy_support = urllib2.ProxyHandler({})
    >>> opener = urllib2.build_opener(proxy_support)
    >>> urllib2.install_opener(opener)
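
Going the other way (a sketch with a made-up proxy address), the same handler
routes requests *through* an explicit proxy when you map URL schemes to proxy
URLs::

    >>> proxy = urllib2.ProxyHandler({'http': 'http://proxy.example.com:3128/'})
    >>> opener = urllib2.build_opener(proxy)
    >>> urllib2.install_opener(opener)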

Currently ``urllib2`` *does not* support fetching of ``https`` locations
through a proxy. However, this can be enabled by extending urllib2 as shown in
the recipe [#]_.

Sockets and Layers
------------------

The Python support for fetching resources from the web is layered. urllib2 uses
the httplib library, which in turn uses the socket library. As of Python 2.3
you can specify how long a socket should wait for a response before timing out;
by default the socket module has *no timeout* and can hang. Currently, though,
the socket timeout is not exposed at the httplib or urllib2 levels. However,
you can set the default timeout globally for all sockets using ::

    import socket
    import urllib2

    # timeout in seconds
    timeout = 10
    socket.setdefaulttimeout(timeout)

    # this call to urllib2.urlopen now uses the default timeout we set above
    req = urllib2.Request('http://www.voidspace.org.uk')
    response = urllib2.urlopen(req)
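
Note that in Python 2.6 and later *urlopen* also accepts a ``timeout``
argument directly, so the global socket default is no longer the only option;
a one-line sketch::

    response = urllib2.urlopen('http://www.voidspace.org.uk', timeout=10)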

Footnotes
---------

.. [#] Browser sniffing is a very bad practice for website design; building
   sites using web standards is much more sensible. Unfortunately a lot of
   sites still send different versions to different browsers.
.. [#] In my case I have to use a proxy to access the internet at work. If you
   attempt to fetch *localhost* URLs through this proxy it blocks them. IE is
   set to use the proxy, which urllib2 picks up on. In order to test scripts
   with a localhost server, I have to prevent urllib2 from using the proxy.
.. [#] urllib2 opener for SSL proxy (CONNECT method): `ASPN Cookbook Recipe`_.